* [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
@ 2021-07-05 10:07 Yan, Xiaoping (NSB - CN/Hangzhou)
2021-07-13 12:35 ` Asaf Penso
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-07-05 10:07 UTC (permalink / raw)
To: users
Hi,
When doing a traffic loopback test on a mlx5 VF, we found some packet loss (not all packets were received back).
From the xstats counters, I can see that all packets were received in rx_port_unicast_packets, but rx_good_packets shows a lower count, and rx_port_unicast_packets - rx_good_packets = lost packets,
i.e. the packets are lost between rx_port_unicast_packets and rx_good_packets.
However, I cannot find any other counter indicating where exactly those packets are lost.
Any idea?
Attached are the counter logs (bf = before the test, af = after the test; fp-cli dpdk-port-stats is the command used to get xstats, and ethtool -S _f1 (the VF used) is also printed).
The test equipment reports: 2911176 packets sent, 2909474 received, 1702 dropped.
The xstats delta (after - before) shows rx_port_unicast_packets 2911177 and rx_good_packets 2909475, so the drop (2911177 - 2909475) is 1702.
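The delta arithmetic can be scripted; a minimal sketch (the counter names are as they appear in the xstats output, the helper itself is hypothetical):

```python
# Sketch: compute per-counter deltas between two xstats snapshots
# taken before ("bf") and after ("af") a test run.

def xstats_delta(before, after):
    """Return {counter: after - before} for counters present in both."""
    return {k: after[k] - before[k] for k in after if k in before}

bf = {"rx_port_unicast_packets": 0, "rx_good_packets": 0}
af = {"rx_port_unicast_packets": 2911177, "rx_good_packets": 2909475}

delta = xstats_delta(bf, af)
lost = delta["rx_port_unicast_packets"] - delta["rx_good_packets"]
# lost == 1702, matching the drop reported by the test equipment
```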
BTW, I also noticed the discussion "packet loss between phy and good counter"
http://mails.dpdk.org/archives/users/2018-July/003271.html
but my case seems to be different: the packets are also received in rx_port_unicast_packets, and I checked the counters on the PF (ethtool -S ens1f0 in the attached log); rx_discards_phy is not increasing.
Thank you.
Best regards
Yan Xiaoping
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-07-05 10:07 [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-07-13 12:35 ` Asaf Penso
2021-07-26 4:52 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: Asaf Penso @ 2021-07-13 12:35 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), users
Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hello Yan,
Can you please mention which DPDK version you use, and whether you also see this issue with the latest upstream version?
Regards,
Asaf Penso
* Re: [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-07-13 12:35 ` Asaf Penso
@ 2021-07-26 4:52 ` Yan, Xiaoping (NSB - CN/Hangzhou)
[not found] ` <DM8PR12MB54940E42337767B960E6BD28CDA69@DM8PR12MB5494.namprd12.prod.outlook.com>
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-07-26 4:52 UTC (permalink / raw)
To: Asaf Penso, users; +Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
The DPDK version in use is 19.11.
I have not tried the latest upstream version.
Performance seems to be affected by IPv6 neighbor advertisement packets arriving on this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive on the interface, and packet loss happens.
When we change the default VLAN in the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets arrive in rx_vport_unicast_packets.
In the packet-loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?
Any suggestion?
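For reference, a snapshot diff along these lines can show which counters moved during a test run (a sketch; it assumes plain `ethtool -S`-style text output, and the numbers are illustrative):

```python
import re

def parse_ethtool_stats(text):
    """Parse `ethtool -S <iface>`-style output into {counter: value}."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r"\s*([\w.]+):\s*(\d+)\s*$", line)
        if m:
            stats[m.group(1)] = int(m.group(2))
    return stats

before = parse_ethtool_stats("""
     rx_vport_unicast_packets: 100
     rx_discards_phy: 0
""")
after = parse_ethtool_stats("""
     rx_vport_unicast_packets: 2911277
     rx_discards_phy: 0
""")

# Keep only counters that increased during the test; a drop counter
# that stays at 0 (like rx_discards_phy here) will not appear.
increased = {k: after[k] - before.get(k, 0)
             for k in after if after[k] > before.get(k, 0)}
```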
Thank you.
Best regards
Yan Xiaoping
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
[not found] ` <4b0dd266b53541c7bc4964c29e24a0e6@nokia-sbell.com>
@ 2021-09-30 8:05 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 6:55 ` Asaf Penso
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-09-30 8:05 UTC (permalink / raw)
To: Asaf Penso, users; +Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but no error/miss counter tells why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestion?
Counters in the test center (traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
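Making the comparison explicit with the numbers above (a sketch; only a few of the counters are shown):

```python
# Selected counters from the xstats dump above.
xstats = {
    "rx_phy_packets": 0,            # 0 on a VF: phy counters live on the PF
    "rx_unicast_packets": 617496152,
    "rx_good_packets": 617475731,
    "rx_missed_errors": 0,
    "rx_out_of_buffer": 0,
}

lost = xstats["rx_unicast_packets"] - xstats["rx_good_packets"]
# 20421 packets vanish between the vport counter and the PMD counter,
# yet every error/miss counter above reads 0 -- which is the puzzle.
assert lost > 0 and xstats["rx_missed_errors"] == 0
```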
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 16:26
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com>; 'Matan Azrad' <matan@nvidia.com>; 'Raslan Darawsheh' <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally ConnectX-4, now ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets while no error/miss counter increases?
Also, do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Would that affect performance?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 15:03
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (we upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021年9月29日 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I also tried testpmd with the following command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only achieves 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
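For a rough sense of the per-packet budget at these rates (assuming a 2.2 GHz core purely for illustration; the actual clock on the host may differ):

```python
def cycles_per_packet(pps, core_hz=2.2e9):
    """Rough per-packet CPU cycle budget for a single forwarding core."""
    return core_hz / pps

# At 1.4 Mpps a single 2.2 GHz core has ~1571 cycles per packet; at
# 1.5 Mpps the budget shrinks to ~1467 cycles, so even a small extra
# per-packet cost (e.g. handling stray ICMPv6 traffic) can push the
# core over budget and cause occasional drops.
budget_14 = cycles_per_packet(1.4e6)
budget_15 = cycles_per_packet(1.5e6)
```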
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月26日 13:19
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6WIND fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there a command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses a CPU core from an exclusive pool<https://github.com/nokia/CPU-Pooler>.
On the other hand, CPU isolation is done by the host infrastructure and is a bit complicated, so I'm not sure whether there is really no other task running on this core.
BTW, we recently switched the host infrastructure to Red Hat OpenShift Container Platform, and the same problem is there.
We can get 1.6 Mpps with an Intel 810 NIC, but only about 1 Mpps with the Mellanox NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
It contains a log about CPU affinity, and some of the mlx5_xxx threads seem strange to me.
Could you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021年9月26日 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-09-30 8:05 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-14 6:55 ` Asaf Penso
2021-10-14 9:33 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: Asaf Penso @ 2021-10-14 6:55 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), users
Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 13868 bytes --]
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
[-- Attachment #2: Type: text/html, Size: 65654 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-14 6:55 ` Asaf Penso
@ 2021-10-14 9:33 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 9:50 ` Asaf Penso
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-14 9:33 UTC (permalink / raw)
To: Asaf Penso, users; +Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net>
Date: Fri Nov 27 19:48:48 2020 +0100
version: 20.11.0
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021年10月14日 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but no error/miss counter tells why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestion?
Counters in the test center (traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
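For this capture the missing packets can be accounted for from the vport counters alone; a quick arithmetic check using the numbers printed above (unicast plus multicast plus broadcast received at the vport, minus what reached the good counter):

```python
# Values copied from the xstats listing above.
rx_unicast   = 617_496_152   # rx_unicast_packets
rx_multicast = 3             # rx_multicast_packets
rx_broadcast = 56            # rx_broadcast_packets
rx_good      = 617_475_731   # rx_good_packets

# Packets seen by the vport but never delivered to the application.
lost = (rx_unicast + rx_multicast + rx_broadcast) - rx_good
print(lost)  # 20480, matching the generator's reported drop count
```

So the vport and generator agree on the loss; what is missing is any counter explaining where those 20480 packets went.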
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 16:26
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; 'Matan Azrad' <matan@nvidia.com<mailto:matan@nvidia.com>>; 'Raslan Darawsheh' <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally it was a CX-4, now it is a CX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets without any error/miss counter increasing?
And do you know about the mlx5_* kernel threads?
Their CPU affinity covers all CPU cores, including the core used by fastpath/testpmd.
Could that affect performance?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
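One thing worth trying is repinning those mlx5_* kernel threads away from the polling cores with taskset. A small sketch for computing the affinity mask, where the 28-CPU count matches the host above but the reserved core set {2, 3} is only an assumption for illustration:

```python
def cpumask_excluding(total_cpus, reserved):
    """Hex cpumask covering all CPUs except the reserved (polling) cores,
    suitable for `taskset -p <mask> <pid>`."""
    mask = 0
    for cpu in range(total_cpus):
        if cpu not in reserved:
            mask |= 1 << cpu
    return hex(mask)

# 28 CPUs as on the host above; cores 2-3 reserved for testpmd (assumption).
print(cpumask_excluding(28, {2, 3}))  # 0xffffff3
```

Whether the kernel actually honors this for every mlx5 worker, and whether it changes the drop behavior, would need to be verified on the host.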
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 15:03
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (we upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月29日 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I tried also with testpmd with such command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only achieves 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
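To put those rates in perspective, the per-packet cycle budget of a single polling core shrinks quickly as the rate rises. A back-of-the-envelope sketch, where the 2.4 GHz clock is purely an assumption (check the actual core frequency on the host):

```python
def cycles_per_packet(core_hz, pps):
    """Cycle budget a single polling core has for each packet at a given rate."""
    return core_hz / pps

# Assuming a 2.4 GHz core (hypothetical; verify against /proc/cpuinfo):
budget_14 = cycles_per_packet(2.4e9, 1.4e6)  # ~1714 cycles/packet
budget_15 = cycles_per_packet(2.4e9, 1.5e6)  # 1600 cycles/packet
print(round(budget_14), round(budget_15))
```

If per-packet processing plus any interference (kernel threads, the ICMPv6 traffic mentioned earlier) exceeds that budget, the RX ring backs up and drops follow.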
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月26日 13:19
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6wind fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there any command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6wind fastpath) runs inside a container and uses a CPU core from an exclusive pool<https://github.com/nokia/CPU-Pooler>.
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.
BTW, we recently switched the host infra to Red Hat OpenShift Container Platform, and the same problem is there…
We can get 1.6 Mpps with an Intel 810 NIC, but only 1 Mpps with the Mellanox NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
There is a log about CPU affinity, and some mlx5_* threads seem strange to me…
Can you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月26日 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
The DPDK version in use is 19.11.
I have not tried with the latest upstream version.
It seems the performance is affected by IPv6 neighbor advertisement packets coming to this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive at the interface, and packet loss happens.
When we change the default VLAN on the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets are counted in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?
Any suggestion?
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年7月13日 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hello Yan,
Can you please mention which DPDK version you use and whether you see this issue also with latest upstream version?
Regards,
Asaf Penso
>-----Original Message-----
>From: users <users-bounces@dpdk.org<mailto:users-bounces@dpdk.org>> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org<mailto:users@dpdk.org>
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing traffic loopback test on a mlx5 VF, we found there are some
>packet loss (not all packet received back ).
>
>From xstats counters, I found all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has lower counter, and
>rx_port_unicast_packets - rx_good_packets = lost packets i.e. packet
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached is the counter logs. (bf is before the test, af is after the
>test, fp-cli dpdk-port-stats is the command used to get xstats, and
>ethtool -S _f1 (the vf
>used) also printed) Test equipment reports that it sends: 2911176
>packets,
>receives: 2909474, dropped: 1702 And the xstats (after - before) shows
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so drop
>(2911177 - rx_good_packets) is 1702
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different as packet also received in
>rx_port_unicast_packets, and I checked counter from pf (ethtool -S
>ens1f0 in attached log), rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-14 9:33 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-14 9:50 ` Asaf Penso
2021-10-14 10:15 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: Asaf Penso @ 2021-10-14 9:50 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), users
Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 15358 bytes --]
Can you please try the last LTS 20.11.3?
We have some related fixes and we think the issue is already solved.
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Date: Fri Nov 27 19:48:48 2020 +0100
version: 20.11.0
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年10月14日 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but no error/miss counter tells why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestion?
Counters in the test center (traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 16:26
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; 'Matan Azrad' <matan@nvidia.com<mailto:matan@nvidia.com>>; 'Raslan Darawsheh' <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally it was a CX-4, now it is a CX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets without any error/miss counter increasing?
And do you know about the mlx5_* kernel threads?
Their CPU affinity covers all CPU cores, including the core used by fastpath/testpmd.
Could that affect performance?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 15:03
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (We upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月29日 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I tried also with testpmd with such command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only achieves 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月26日 13:19
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6wind fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there any command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6wind fastpath) runs inside a container and uses a CPU core from an exclusive pool<https://github.com/nokia/CPU-Pooler>.
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.
BTW, we recently switched the host infra to Red Hat OpenShift Container Platform, and the same problem is there…
We can get 1.6 Mpps with an Intel 810 NIC, but only 1 Mpps with the Mellanox NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
There is a log about CPU affinity, and some mlx5_* threads seem strange to me…
Can you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月26日 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
The DPDK version in use is 19.11.
I have not tried with the latest upstream version.
It seems the performance is affected by IPv6 neighbor advertisement packets coming to this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive at the interface, and packet loss happens.
When we change the default VLAN on the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets are counted in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?
Any suggestion?
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年7月13日 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hello Yan,
Can you please mention which DPDK version you use and whether you see this issue also with latest upstream version?
Regards,
Asaf Penso
>-----Original Message-----
>From: users <users-bounces@dpdk.org<mailto:users-bounces@dpdk.org>> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org<mailto:users@dpdk.org>
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing traffic loopback test on a mlx5 VF, we found there are some
>packet loss (not all packet received back ).
>
>From xstats counters, I found all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has lower counter, and
>rx_port_unicast_packets - rx_good_packets = lost packets i.e. packet
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached is the counter logs. (bf is before the test, af is after the
>test, fp-cli dpdk-port-stats is the command used to get xstats, and
>ethtool -S _f1 (the vf
>used) also printed) Test equipment reports that it sends: 2911176
>packets,
>receives: 2909474, dropped: 1702 And the xstats (after - before) shows
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so drop
>(2911177 - rx_good_packets) is 1702
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different as packet also received in
>rx_port_unicast_packets, and I checked counter from pf (ethtool -S
>ens1f0 in attached log), rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
[-- Attachment #2: Type: text/html, Size: 71350 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-14 9:50 ` Asaf Penso
@ 2021-10-14 10:15 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 11:48 ` Asaf Penso
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-14 10:15 UTC (permalink / raw)
To: Asaf Penso, users; +Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
Ok, I will try (probably some days later, as I’m busy with another task right now).
Could you also share the commit IDs for those fixes?
Thank you.
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021年10月14日 17:51
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Can you please try the last LTS 20.11.3?
We have some related fixes and we think the issue is already solved.
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Date: Fri Nov 27 19:48:48 2020 +0100
version: 20.11.0
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestions?
Counters from the test center (traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
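As a side note, a quick way to compare two such dumps is to print only the counters that changed. A minimal sketch (the bf.txt/af.txt file names are placeholders for before/after dumps saved as "name: value" lines):

```shell
#!/bin/sh
# Sketch: print per-counter deltas between two xstats dumps.
# Each input file holds "counter_name: value" lines, as in the dump above.
xstats_delta() {
    awk 'NR == FNR { before[$1] = $2; next }    # 1st file: remember "before"
         { d = $2 - before[$1]                  # 2nd file: compute delta
           if (d != 0) printf "%s %d\n", $1, d }' "$1" "$2"
}
```

With dumps like the one above, `xstats_delta bf.txt af.txt` would make it obvious that rx_unicast_packets advances more than rx_good_packets.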
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 16:26
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com>; 'Matan Azrad' <matan@nvidia.com>; 'Raslan Darawsheh' <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally it was a ConnectX-4, now a ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets even though no error/miss counter increases?
Also, do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Could that have an effect?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
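If the affinity of those kernel threads is a concern, they can usually be repinned away from the polling core with taskset. A hedged sketch (the core numbers in the usage example are assumptions for this 28-core host; kernel workers flagged PF_NO_SETAFFINITY will refuse, hence the `|| true`):

```shell
#!/bin/sh
# Sketch: move mlx5* kernel threads off the PMD polling core.
cpulist_without() {          # cpulist_without CORE MAX -> e.g. "0-2,4-27"
    core=$1; max=$2
    if [ "$core" -eq 0 ]; then echo "1-$max"
    elif [ "$core" -eq "$max" ]; then echo "0-$((max - 1))"
    else echo "0-$((core - 1)),$((core + 1))-$max"; fi
}

repin_mlx5_threads() {       # repin_mlx5_threads PMD_CORE MAX_CORE (needs root)
    allowed=$(cpulist_without "$1" "$2")
    for pid in $(ps -eo pid,comm | awk '$2 ~ /^mlx5/ { print $1 }'); do
        taskset -cp "$allowed" "$pid" || true   # some kthreads refuse affinity
    done
}
```

For example, `repin_mlx5_threads 3 27` would keep core 3 free for the PMD on a 0-27 core host.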
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 15:03
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (We upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: September 29, 2021 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I tried also with testpmd with such command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only reaches 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 26, 2021 13:19
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6WIND fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there any command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses a CPU core from an exclusive pool (https://github.com/nokia/CPU-Pooler).
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.
BTW, we recently switched the host infra to Red Hat OpenShift Container Platform, and the same problem is there.
We can get 1.6 Mpps with an Intel E810 NIC, but only 1 Mpps with the Mellanox NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
It contains logs about CPU affinity, and some mlx5_xxx threads seem strange to me.
Can you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: September 26, 2021 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
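For reference, "isolate mode" here refers to rte_flow isolated mode (rte_flow_isolate), not CPU isolation: in isolated mode the port only receives traffic matching explicitly created flow rules. A minimal testpmd sketch to enable it, assuming it must be set while the port is stopped:

```
testpmd> port stop 0
testpmd> flow isolate 0 1
testpmd> port start 0
```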
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
The DPDK version in use is 19.11.
I have not tried the latest upstream version.
It seems performance is affected by IPv6 neighbor advertisement packets arriving on this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive on the interface, and packet loss happens.
When we change the default VLAN on the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets are counted in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?
Any suggestions?
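On the last question: for mlx5, drops caused by the application not posting Rx buffers fast enough are supposed to show up in rx_out_of_buffer (visible in the xstats dumps above and, on some driver versions, in `ethtool -S` on the VF netdev). A tiny sketch for pulling it out of a dump; whether the counter actually increments per VF depends on the driver/firmware version:

```shell
#!/bin/sh
# Sketch: extract the first out-of-buffer counter from an `ethtool -S <vf>`
# or xstats dump supplied on stdin.
oob_count() {
    awk '/out_of_buffer/ { print $2; exit }'
}
```

For example, `ethtool -S _f1 | oob_count` with the VF netdev name from the earlier logs.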
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: July 13, 2021 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hello Yan,
Can you please mention which DPDK version you use and whether you see this issue also with latest upstream version?
Regards,
Asaf Penso
>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing traffic loopback test on a mlx5 VF, we found there are some
>packet loss (not all packet received back ).
>
>From xstats counters, I found all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has lower counter, and
>rx_port_unicast_packets - rx_good_packets = lost packets i.e. packet
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached is the counter logs. (bf is before the test, af is after the
>test, fp-cli dpdk-port-stats is the command used to get xstats, and
>ethtool -S _f1 (the vf
>used) also printed) Test equipment reports that it sends: 2911176
>packets,
>receives: 2909474, dropped: 1702 And the xstats (after - before) shows
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so drop
>(2911177 - rx_good_packets) is 1702
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different as packet also received in
>rx_port_unicast_packets, and I checked counter from pf (ethtool -S
>ens1f0 in attached log), rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-14 10:15 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-14 11:48 ` Asaf Penso
2021-10-18 9:28 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: Asaf Penso @ 2021-10-14 11:48 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), users
Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 16825 bytes --]
This is the commit id which we think solves the issue you see:
https://git.dpdk.org/dpdk-stable/commit/?h=v20.11.3&id=ede02cfc4783446c4068a5a1746f045465364fac
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, October 14, 2021 1:15 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Okay, I will try (probably some days later, as I'm busy with another task right now).
Could you also share the commit IDs for those fixes?
Thank you.
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 17:51
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Can you please try the last LTS 20.11.3?
We have some related fixes and we think the issue is already solved.
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net>
Date: Fri Nov 27 19:48:48 2020 +0100
version: 20.11.0
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestions?
Counters from the test center (traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 16:26
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com>; 'Matan Azrad' <matan@nvidia.com>; 'Raslan Darawsheh' <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally it was a ConnectX-4, now a ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets even though no error/miss counter increases?
Also, do you know about the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Could that have an effect?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 29, 2021 15:03
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (We upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: September 29, 2021 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I tried also with testpmd with such command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only reaches 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: September 26, 2021 13:19
To: 'Asaf Penso' <asafp@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6WIND fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there any command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses a CPU core from an exclusive pool (https://github.com/nokia/CPU-Pooler).
On the other hand, the CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.
BTW, we recently switched the host infra to Red Hat OpenShift Container Platform, and the same problem is there.
We can get 1.6 Mpps with an Intel E810 NIC, but only 1 Mpps with the Mellanox NIC.
I also raised a ticket with Mellanox support:
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
It contains logs about CPU affinity, and some mlx5_xxx threads seem strange to me.
Can you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: September 26, 2021 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
The DPDK version in use is 19.11.
I have not tried the latest upstream version.
It seems performance is affected by IPv6 neighbor advertisement packets arriving on this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive on the interface, and packet loss happens.
When we change the default VLAN on the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets are counted in rx_vport_unicast_packets.
In the packet loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter to indicate this?
Any suggestions?
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: July 13, 2021 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hello Yan,
Can you please mention which DPDK version you use and whether you see this issue also with latest upstream version?
Regards,
Asaf Penso
>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing traffic loopback test on a mlx5 VF, we found there are some
>packet loss (not all packet received back ).
>
>From xstats counters, I found all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has lower counter, and
>rx_port_unicast_packets - rx_good_packets = lost packets i.e. packet
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached is the counter logs. (bf is before the test, af is after the
>test, fp-cli dpdk-port-stats is the command used to get xstats, and
>ethtool -S _f1 (the vf
>used) also printed) Test equipment reports that it sends: 2911176
>packets,
>receives: 2909474, dropped: 1702 And the xstats (after - before) shows
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so drop
>(2911177 - rx_good_packets) is 1702
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different as packet also received in
>rx_port_unicast_packets, and I checked counter from pf (ethtool -S
>ens1f0 in attached log), rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
[-- Attachment #2: Type: text/html, Size: 76830 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-14 11:48 ` Asaf Penso
@ 2021-10-18 9:28 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-18 10:45 ` David Marchand
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-18 9:28 UTC (permalink / raw)
To: Asaf Penso, users; +Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
I have cloned the DPDK code from GitHub:
[xiaopiya@fedora30 dpdk]$ git remote -v
origin https://github.com/DPDK/dpdk.git (fetch)
origin https://github.com/DPDK/dpdk.git (push)
Which tag should I use?
Or do I have to download 20.11.3 from git.dpdk.org?
Sorry, I don't know the relationship between https://github.com/DPDK and git.dpdk.org.
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 19:49
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
This is the commit id which we think solves the issue you see:
https://git.dpdk.org/dpdk-stable/commit/?h=v20.11.3&id=ede02cfc4783446c4068a5a1746f045465364fac
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Sent: Thursday, October 14, 2021 1:15 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Okay, I will try (probably some days later, as I'm busy with another task right now).
Could you also share the commit IDs for those fixes?
Thank you.
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com>
Sent: October 14, 2021 17:51
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; users@dpdk.org
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Can you please try the last LTS 20.11.3?
We have some related fixes and we think the issue is already solved.
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Thursday, October 14, 2021 12:33 PM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I’m using 20.11
commit b1d36cf828771e28eb0130b59dcf606c2a0bc94d (HEAD, tag: v20.11)
Author: Thomas Monjalon <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Date: Fri Nov 27 19:48:48 2020 +0100
version: 20.11.0
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年10月14日 14:56
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are you using the latest stable 20.11.3? If not, can you try?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Thursday, September 30, 2021 11:05 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
In the log below, we can clearly see that packets are dropped between the rx_unicast_packets and rx_good_packets counters,
but there is no error/miss counter telling why or where the packets are dropped.
Is this a known bug/limitation of Mellanox cards?
Any suggestion?
Counter in test center(traffic generator):
Tx count: 617496152
Rx count: 617475672
Drop: 20480
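For reference, the drop the traffic generator reports can be recomputed directly from the xstats that follow: rx_good_packets counts all good traffic delivered to the application, so the multicast and broadcast packets have to be added to the unicast side before subtracting. A minimal shell sketch using the counter values from this very test:

```shell
# Counter values copied from the xstats dump below; the VF-level loss is
# the difference between what the vport counted on the wire side and what
# the PMD delivered as "good" packets.
rx_unicast=617496152
rx_multicast=3
rx_broadcast=56
rx_good=617475731
drop=$((rx_unicast + rx_multicast + rx_broadcast - rx_good))
echo "lost inside the VF datapath: $drop"
```

This prints a loss of 20480 packets, matching the generator's drop count exactly, which confirms the loss happens between the vport counters and the good counters rather than on the wire.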
testpmd started with:
dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:07.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 617475727 RX-dropped: 0 RX-total: 617475727
TX-packets: 617475727 TX-dropped: 0 TX-total: 617475727
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 617475731
tx_good_packets: 617475730
rx_good_bytes: 45693207378
tx_good_bytes: 45693207036
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 617475731
rx_q0_bytes: 45693207378
rx_q0_errors: 0
tx_q0_packets: 617475730
tx_q0_bytes: 45693207036
rx_wqe_errors: 0
rx_unicast_packets: 617496152
rx_unicast_bytes: 45694715248
tx_unicast_packets: 617475730
tx_unicast_bytes: 45693207036
rx_multicast_packets: 3
rx_multicast_bytes: 342
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 56
rx_broadcast_bytes: 7308
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 16:26
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: 'Slava Ovsiienko' <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; 'Matan Azrad' <matan@nvidia.com<mailto:matan@nvidia.com>>; 'Raslan Darawsheh' <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
We also replaced the NIC (originally a ConnectX-4, now a ConnectX-5), but the result is the same.
Do you know why packets are dropped between rx_port_unicast_packets and rx_good_packets while there is no error/miss counter?
And do you know the mlx5_xxx kernel threads?
They have CPU affinity to all CPU cores, including the core used by fastpath/testpmd.
Would that affect performance?
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ taskset -cp 74548
pid 74548's current affinity list: 0-27
[cranuser1@hztt24f-rm17-ocp-sno-1 ~]$ ps -emo pid,tid,psr,comm | grep mlx5
903 - - mlx5_health0000
904 - - mlx5_page_alloc
907 - - mlx5_cmd_0000:0
916 - - mlx5_events
917 - - mlx5_esw_wq
918 - - mlx5_fw_tracer
919 - - mlx5_hv_vhca
921 - - mlx5_fc
924 - - mlx5_health0000
925 - - mlx5_page_alloc
927 - - mlx5_cmd_0000:0
935 - - mlx5_events
936 - - mlx5_esw_wq
937 - - mlx5_fw_tracer
938 - - mlx5_hv_vhca
939 - - mlx5_fc
941 - - mlx5_health0000
942 - - mlx5_page_alloc
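One way to see whether any of these kernel threads actually ran on the polling core is to filter a per-thread `ps` snapshot by processor number. A small sketch (the snapshot below is fabricated for illustration; on a live system pipe `ps -eLo psr,comm` into the awk instead, and use the core number of your forwarding lcore):

```shell
# Sketch: given a 'ps -eLo psr,comm' snapshot (first column = CPU the
# thread last ran on), list the threads seen on the DPDK polling core
# (core 3 in the testpmd runs in this thread).
snapshot='  3 mlx5_fw_tracer
  2 mlx5_events
  3 dpdk-testpmd
  0 systemd'
echo "$snapshot" | awk '$1 == 3 { print $2 }'
```

If anything besides the DPDK polling thread shows up on that core, it is a candidate source of the occasional drops, since even brief preemption of a poll-mode thread can overflow a 512-descriptor Rx ring.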
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月29日 15:03
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
It is 20.11 (We upgraded to 20.11 recently).
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月29日 14:47
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
What dpdk version are you using?
19.11 doesn't support 5tswap mode in testpmd.
Regards,
Asaf Penso
________________________________
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, September 27, 2021 5:55:21 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I tried also with testpmd with such command and configuration:
dpdk-testpmd -l "4,5" --legacy-mem --socket-mem "5000,0" -a 0000:03:02.0 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
testpmd> port stop 0
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
testpmd> set fwd 5tswap
testpmd> start
It only reaches 1.4 Mpps.
At 1.5 Mpps, it starts to drop packets occasionally.
Best regards
Yan Xiaoping
From: Yan, Xiaoping (NSB - CN/Hangzhou)
Sent: 2021年9月26日 13:19
To: 'Asaf Penso' <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; Xu, Meng-Maggie (NSB - CN/Hangzhou) <meng-maggie.xu@nokia-sbell.com<mailto:meng-maggie.xu@nokia-sbell.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
I was using 6WIND fastpath instead of testpmd.
>> Do you configure any flow?
I think not, but is there a command to check?
>> Do you work in isolate mode?
Do you mean the CPU?
The DPDK application (6WIND fastpath) runs inside a container and uses a CPU core from an exclusive pool<https://github.com/nokia/CPU-Pooler>.
On the other hand, CPU isolation is done by the host infrastructure and is a bit complicated; I'm not sure whether any other task really runs on this core.
BTW, we recently switched the host infrastructure to Red Hat OpenShift Container Platform, and the same problem is there…
We can get 1.6 Mpps with an Intel E810 NIC, but only 1 Mpps with the Mellanox NIC.
I raised also a ticket to mellanox Support
https://support.mellanox.com/s/case/5001T00001ZC0jzQAD
The ticket contains a log about CPU affinity, and some mlx5_xxx threads seem strange to me…
Can you please also check the ticket?
Best regards
Yan Xiaoping
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年9月26日 12:57
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
Could you please share the testpmd command line you are using?
Do you configure any flow? Do you work in isolate mode?
Regards,
Asaf Penso
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Monday, July 26, 2021 7:52 AM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
dpdk version in use is 19.11
I have not tried with latest upstream version.
It seems performance is affected by IPv6 neighbor advertisement packets arriving at this interface:
05:20:04.025290 IP6 fe80::6cf1:9fff:fe4e:8a01 > ff02::1: ICMP6, neighbor advertisement, tgt is fe80::6cf1:9fff:fe4e:8a01, length 32
0x0000: 3333 0000 0001 6ef1 9f4e 8a01 86dd 6008
0x0010: fe44 0020 3aff fe80 0000 0000 0000 6cf1
0x0020: 9fff fe4e 8a01 ff02 0000 0000 0000 0000
0x0030: 0000 0000 0001 8800 96d9 2000 0000 fe80
0x0040: 0000 0000 0000 6cf1 9fff fe4e 8a01 0201
0x0050: 6ef1 9f4e 8a01
Somehow, about 100 such packets per second arrive at the interface, and packet loss happens.
When we change the default VLAN in the switch so that no such packets reach the interface (the mlx5 VF under test), there is no packet loss anymore.
In both cases, all packets arrive in rx_vport_unicast_packets.
In the packet-loss case, we see fewer packets in rx_good_packets (rx_vport_unicast_packets = rx_good_packets + lost packets).
If the DPDK application is too slow to receive all packets from the VF, is there any counter indicating this?
Any suggestion?
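On mlx5, the counter that normally answers "was the application too slow?" is rx_out_of_buffer, which (as I understand it) counts packets dropped because no Rx descriptor was available. A small sketch that pulls the relevant counters out of a saved xstats dump; the sample values are taken from the 20.11.3 testpmd run later in this thread:

```shell
# Sketch: parse a saved 'show port xstats' dump. If rx_out_of_buffer stays
# at 0 while rx_unicast_packets exceeds rx_good_packets, the drop happened
# somewhere the PMD counters do not account for.
xstats='rx_good_packets: 41986111
rx_unicast_packets: 41990938
rx_out_of_buffer: 0'
get() { echo "$xstats" | awk -v k="$1:" '$1 == k { print $2 }'; }
gap=$(( $(get rx_unicast_packets) - $(get rx_good_packets) ))
echo "gap=$gap rx_out_of_buffer=$(get rx_out_of_buffer)"
```

With the sample values this reports a gap of 4827 unicast packets against an rx_out_of_buffer of 0, which is exactly the puzzling situation described in this thread.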
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: 2021年7月13日 20:36
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; users@dpdk.org<mailto:users@dpdk.org>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hello Yan,
Can you please mention which DPDK version you use and whether you see this issue also with latest upstream version?
Regards,
Asaf Penso
>-----Original Message-----
>From: users <users-bounces@dpdk.org<mailto:users-bounces@dpdk.org>> On Behalf Of Yan, Xiaoping (NSB -
>CN/Hangzhou)
>Sent: Monday, July 5, 2021 1:08 PM
>To: users@dpdk.org<mailto:users@dpdk.org>
>Subject: [dpdk-users] mlx5 VF packet lost between
>rx_port_unicast_packets and rx_good_packets
>
>Hi,
>
>When doing traffic loopback test on a mlx5 VF, we found there are some
>packet loss (not all packet received back ).
>
>From xstats counters, I found all packets have been received in
>rx_port_unicast_packets, but rx_good_packets has lower counter, and
>rx_port_unicast_packets - rx_good_packets = lost packets i.e. packet
>lost between rx_port_unicast_packets and rx_good_packets.
>But I can not find any other counter indicating where exactly those
>packets are lost.
>
>Any idea?
>
>Attached is the counter logs. (bf is before the test, af is after the
>test, fp-cli dpdk-port-stats is the command used to get xstats, and
>ethtool -S _f1 (the vf
>used) also printed) Test equipment reports that it sends: 2911176
>packets,
>receives: 2909474, dropped: 1702 And the xstats (after - before) shows
>rx_port_unicast_packets 2911177, rx_good_packets 2909475, so drop
>(2911177 - rx_good_packets) is 1702
>
>BTW, I also noticed this discussion "packet loss between phy and good
>counter"
>http://mails.dpdk.org/archives/users/2018-July/003271.html
>but my case seems to be different as packet also received in
>rx_port_unicast_packets, and I checked counter from pf (ethtool -S
>ens1f0 in attached log), rx_discards_phy is not increasing.
>
>Thank you.
>
>Best regards
>Yan Xiaoping
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-18 9:28 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-18 10:45 ` David Marchand
2021-10-27 6:17 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: David Marchand @ 2021-10-18 10:45 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou)
Cc: Asaf Penso, users, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou)
<xiaoping.yan@nokia-sbell.com> wrote:
> I have cloned dpdk code from github
>
> [xiaopiya@fedora30 dpdk]$ git remote -v
> origin https://github.com/DPDK/dpdk.git (fetch)
> origin https://github.com/DPDK/dpdk.git (push)
>
> which tag should I use?
>
> Or do I have to download 20.11.3 from git.dpdk.org?
>
> Sorry, I don’t know the relation between https://github.com/DPDK and git.dpdk.org?
Github DPDK/dpdk repo is a replication of the main repo hosted on
dpdk.org servers.
The official git repos and releases tarballs are on dpdk.org servers.
The list of official releases tarballs is at: http://core.dpdk.org/download/
The main repo git is at: https://git.dpdk.org/dpdk/
The LTS/stable releases repo git is at: https://git.dpdk.org/dpdk-stable/
--
David Marchand
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-18 10:45 ` David Marchand
@ 2021-10-27 6:17 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-27 7:26 ` David Marchand
2021-10-27 7:54 ` Martin Weiser
0 siblings, 2 replies; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-27 6:17 UTC (permalink / raw)
To: David Marchand
Cc: Asaf Penso, users, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
I tried with DPDK 20.11.3 downloaded from https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
The problem still exists:
1. there is packet loss at 2 Mpps (small packets),
2. there is no counter for the dropped packets in the NIC.
Traffic generator stats: sends 41990938, receives back 41986105, lost 4833
testpmd fwd stats: RX-packets: 41986110, TX-packets: 41986110
Port xstats: rx_unicast_packets: 41990938 (all packets reached the NIC port), rx_good_packets: 41986111 (some are lost), but there is no counter for the lost packets.
Here is the log:
[root@up-0 /]# dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:06.7 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
EAL: Detected 28 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:06.7 (socket 0)
mlx5_pci: cannot bind mlx5 socket: Read-only file system
mlx5_pci: Cannot initialize socket: Read-only file system
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 7A:9A:8A:A6:86:93
Checking link statuses...
Done
testpmd> port stop 0
Stopping ports...
Checking link statuses...
Done
testpmd> vlan set filter on 0
testpmd> rx_vlan add 767 0
testpmd> port start 0
Port 0: 7A:9A:8A:A6:86:93
Checking link statuses...
Done
testpmd> set fwd 5tswap
Set 5tswap packet forwarding mode
testpmd> start
5tswap packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
5tswap packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x200 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x200
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd> show fwd stats all
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 41986110 RX-dropped: 0 RX-total: 41986110
TX-packets: 41986110 TX-dropped: 0 TX-total: 41986110
----------------------------------------------------------------------------
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 41986111
tx_good_packets: 41986111
rx_good_bytes: 3106973594
tx_good_bytes: 3106973594
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 41986111
rx_q0_bytes: 3106973594
rx_q0_errors: 0
tx_q0_packets: 41986111
tx_q0_bytes: 3106973594
rx_wqe_errors: 0
rx_unicast_packets: 41990938
rx_unicast_bytes: 3107329412
tx_unicast_packets: 41986111
tx_unicast_bytes: 3106973594
rx_multicast_packets: 1
rx_multicast_bytes: 114
tx_multicast_packets: 0
tx_multicast_bytes: 0
rx_broadcast_packets: 5
rx_broadcast_bytes: 1710
tx_broadcast_packets: 0
tx_broadcast_bytes: 0
tx_phy_packets: 0
rx_phy_packets: 0
rx_phy_crc_errors: 0
tx_phy_bytes: 0
rx_phy_bytes: 0
rx_phy_in_range_len_errors: 0
rx_phy_symbol_errors: 0
rx_phy_discard_packets: 0
tx_phy_discard_packets: 0
tx_phy_errors: 0
rx_out_of_buffer: 0
tx_pp_missed_interrupt_errors: 0
tx_pp_rearm_queue_errors: 0
tx_pp_clock_queue_errors: 0
tx_pp_timestamp_past_errors: 0
tx_pp_timestamp_future_errors: 0
tx_pp_jitter: 0
tx_pp_wander: 0
tx_pp_sync_lost: 0
testpmd> q
Command not found
testpmd> exit
Command not found
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 41986112 RX-dropped: 0 RX-total: 41986112
TX-packets: 41986112 TX-dropped: 0 TX-total: 41986112
----------------------------------------------------------------------------
Best regards
Yan Xiaoping
-----Original Message-----
From: David Marchand <david.marchand@redhat.com>
Sent: 2021年10月18日 18:45
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com> wrote:
> I have cloned dpdk code from github
>
> [xiaopiya@fedora30 dpdk]$ git remote -v origin
> https://github.com/DPDK/dpdk.git (fetch) origin
> https://github.com/DPDK/dpdk.git (push)
>
> which tag should I use?
>
> Or do I have to download 20.11.3 from git.dpdk.org?
>
> Sorry, I don’t know the relation between https://github.com/DPDK and git.dpdk.org?
Github DPDK/dpdk repo is a replication of the main repo hosted on dpdk.org servers.
The official git repos and releases tarballs are on dpdk.org servers.
The list of official releases tarballs is at: http://core.dpdk.org/download/ The main repo git is at: https://git.dpdk.org/dpdk/ The LTS/stable releases repo git is at: https://git.dpdk.org/dpdk-stable/
--
David Marchand
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-27 6:17 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-27 7:26 ` David Marchand
2021-10-27 7:54 ` Martin Weiser
1 sibling, 0 replies; 19+ messages in thread
From: David Marchand @ 2021-10-27 7:26 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), Slava Ovsiienko, Matan Azrad
Cc: Asaf Penso, users, Raslan Darawsheh
On Wed, Oct 27, 2021 at 8:18 AM Yan, Xiaoping (NSB - CN/Hangzhou)
<xiaoping.yan@nokia-sbell.com> wrote:
>
> Hi,
>
> I tried with dpdk 20.11-3 downloaded from https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
> Problem still exist:
> 1. there is packet loss with 2mpps (small packet),
> 2. no counter for the dropped packet in NIC.
For mlx maintainers.
Thanks.
--
David Marchand
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-27 6:17 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-27 7:26 ` David Marchand
@ 2021-10-27 7:54 ` Martin Weiser
2021-10-28 1:39 ` Yan, Xiaoping (NSB - CN/Hangzhou)
1 sibling, 1 reply; 19+ messages in thread
From: Martin Weiser @ 2021-10-27 7:54 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou), David Marchand
Cc: Asaf Penso, users, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Hi,
you may want to check the counter 'rx_prio0_buf_discard' with ethtool
(which is not available in DPDK xstats as it seems that this counter is
global for the card and not available per port).
I opened a ticket a while ago regarding this issue:
https://bugs.dpdk.org/show_bug.cgi?id=749
Best regards,
Martin
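Since buffer-discard counters may appear under slightly different names depending on firmware and driver version, it can help to scan the PF's ethtool output for every discard-style counter at once. A sketch (the sample dump is fabricated; on real hardware pipe `ethtool -S ens1f0` into the grep instead):

```shell
# Sketch: scan an 'ethtool -S <pf>' dump for discard/buffer counters such
# as the rx_prio0_buf_discard counter mentioned above. Sample values are
# made up for illustration only.
stats='rx_prio0_buf_discard: 4833
rx_discards_phy: 0
rx_out_of_buffer: 0
tx_packets_phy: 123'
echo "$stats" | grep -Ei 'discard|out_of_buffer'
```

Comparing such a scan before and after a test run shows whether any card-global discard counter moved, even if it is not exposed through DPDK xstats.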
Am 27.10.21 um 08:17 schrieb Yan, Xiaoping (NSB - CN/Hangzhou):
> Hi,
>
> I tried with dpdk 20.11-3 downloaded from https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
> Problem still exist:
> 1. there is packet loss with 2mpps (small packet),
> 2. no counter for the dropped packet in NIC.
>
> traffic generator stats: sends 41990938, receives back 41986105, lost 4833
> testpmd fwd stats: RX-packets: 41986110, TX-packets: 41986110
> port xstats: rx_unicast_packets: 41990938 (all packets reached to the NIC port), rx_good_packets: 41986111 (some is lost), but there is not any counter of the lost packet.
>
> Here is the log:
> [root@up-0 /]# dpdk-testpmd -l "2,3" --legacy-mem --socket-mem "5000,0" -a 0000:03:06.7 -- -i --nb-cores=1 --portmask=0x1 --rxd=512 --txd=512
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:06.7 (socket 0)
> mlx5_pci: cannot bind mlx5 socket: Read-only file system
> mlx5_pci: Cannot initialize socket: Read-only file system
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
>
> Configuring Port 0 (socket 0)
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> port stop 0
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> vlan set filter on 0
> testpmd> rx_vlan add 767 0
> testpmd> port start 0
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> set fwd 5tswap
> Set 5tswap packet forwarding mode
> testpmd> start
> 5tswap packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
> Logical Core 3 (socket 0) forwards packets on 1 streams:
> RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>
> 5tswap packet forwarding packets/burst=32
> nb forwarding cores=1 - nb forwarding ports=1
> port 0: RX queue number: 1 Tx queue number: 1
> Rx offloads=0x200 Tx offloads=0x0
> RX queue: 0
> RX desc=512 - RX free threshold=64
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> RX Offloads=0x200
> TX queue: 0
> TX desc=512 - TX free threshold=0
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX offloads=0x0 - TX RS bit threshold=0
>
> testpmd> show fwd stats all
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986110 RX-dropped: 0 RX-total: 41986110
> TX-packets: 41986110 TX-dropped: 0 TX-total: 41986110
> ----------------------------------------------------------------------------
>
> testpmd> show port xstats 0
> ###### NIC extended statistics for port 0
> rx_good_packets: 41986111
> tx_good_packets: 41986111
> rx_good_bytes: 3106973594
> tx_good_bytes: 3106973594
> rx_missed_errors: 0
> rx_errors: 0
> tx_errors: 0
> rx_mbuf_allocation_errors: 0
> rx_q0_packets: 41986111
> rx_q0_bytes: 3106973594
> rx_q0_errors: 0
> tx_q0_packets: 41986111
> tx_q0_bytes: 3106973594
> rx_wqe_errors: 0
> rx_unicast_packets: 41990938
> rx_unicast_bytes: 3107329412
> tx_unicast_packets: 41986111
> tx_unicast_bytes: 3106973594
> rx_multicast_packets: 1
> rx_multicast_bytes: 114
> tx_multicast_packets: 0
> tx_multicast_bytes: 0
> rx_broadcast_packets: 5
> rx_broadcast_bytes: 1710
> tx_broadcast_packets: 0
> tx_broadcast_bytes: 0
> tx_phy_packets: 0
> rx_phy_packets: 0
> rx_phy_crc_errors: 0
> tx_phy_bytes: 0
> rx_phy_bytes: 0
> rx_phy_in_range_len_errors: 0
> rx_phy_symbol_errors: 0
> rx_phy_discard_packets: 0
> tx_phy_discard_packets: 0
> tx_phy_errors: 0
> rx_out_of_buffer: 0
> tx_pp_missed_interrupt_errors: 0
> tx_pp_rearm_queue_errors: 0
> tx_pp_clock_queue_errors: 0
> tx_pp_timestamp_past_errors: 0
> tx_pp_timestamp_future_errors: 0
> tx_pp_jitter: 0
> tx_pp_wander: 0
> tx_pp_sync_lost: 0
> testpmd> q
> Command not found
> testpmd> exit
> Command not found
> testpmd> quit
> Telling cores to stop...
> Waiting for lcores to finish...
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986112 RX-dropped: 0 RX-total: 41986112
> TX-packets: 41986112 TX-dropped: 0 TX-total: 41986112
> ----------------------------------------------------------------------------
>
> Best regards
> Yan Xiaoping
>
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: 2021年10月18日 18:45
> To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
> Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
>
> On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com> wrote:
>> I have cloned dpdk code from github
>>
>> [xiaopiya@fedora30 dpdk]$ git remote -v origin
>> https://github.com/DPDK/dpdk.git (fetch) origin
>> https://github.com/DPDK/dpdk.git (push)
>>
>> which tag should I use?
>>
>> Or do I have to download 20.11.3 from git.dpdk.org?
>>
>> Sorry, I don’t know the relation between https://github.com/DPDK and git.dpdk.org?
> Github DPDK/dpdk repo is a replication of the main repo hosted on dpdk.org servers.
>
> The official git repos and releases tarballs are on dpdk.org servers.
> The list of official releases tarballs is at: http://core.dpdk.org/download/ The main repo git is at: https://git.dpdk.org/dpdk/ The LTS/stable releases repo git is at: https://git.dpdk.org/dpdk-stable/
>
>
> --
> David Marchand
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-27 7:54 ` Martin Weiser
@ 2021-10-28 1:39 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-28 1:58 ` Gerry Wan
0 siblings, 1 reply; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-28 1:39 UTC (permalink / raw)
To: Martin Weiser, David Marchand, Asaf Penso, Slava Ovsiienko,
Matan Azrad, Raslan Darawsheh
Cc: users
Hi,
I checked the counters from the PF with ethtool -S; there is no counter named 'rx_prio0_buf_discard'.
Anyway, I checked all the counters in the ethtool output, and none of them reflects the dropped packets.
Any suggestion from the mlx maintainers? @Matan Azrad @Asaf Penso @Slava Ovsiienko @Raslan Darawsheh
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Martin Weiser <martin.weiser@allegro-packets.com>
Sent: 2021年10月27日 15:54
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; David Marchand <david.marchand@redhat.com>
Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
you may want to check the counter 'rx_prio0_buf_discard' with ethtool (which is not available in DPDK xstats as it seems that this counter is global for the card and not available per port).
I opened a ticket a while ago regarding this issue:
https://bugs.dpdk.org/show_bug.cgi?id=749
Best regards,
Martin
Am 27.10.21 um 08:17 schrieb Yan, Xiaoping (NSB - CN/Hangzhou):
> Hi,
>
> I tried with dpdk 20.11-3 downloaded from
> https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
> Problem still exist:
> 1. there is packet loss with 2mpps (small packet), 2. no counter for
> the dropped packet in NIC.
>
> traffic generator stats: sends 41990938, receives back 41986105, lost
> 4833 testpmd fwd stats: RX-packets: 41986110, TX-packets: 41986110
> port xstats: rx_unicast_packets: 41990938 (all packets reached to the NIC port), rx_good_packets: 41986111 (some is lost), but there is not any counter of the lost packet.
>
> Here is the log:
> [root@up-0 /]# dpdk-testpmd -l "2,3" --legacy-mem --socket-mem
> "5000,0" -a 0000:03:06.7 -- -i --nb-cores=1 --portmask=0x1 --rxd=512
> --txd=512
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:06.7
> (socket 0)
> mlx5_pci: cannot bind mlx5 socket: Read-only file system
> mlx5_pci: Cannot initialize socket: Read-only file system
> EAL: No legacy callbacks, legacy socket not created Interactive-mode
> selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
>
> Configuring Port 0 (socket 0)
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> port stop 0
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> vlan set filter on 0
> testpmd> rx_vlan add 767 0
> testpmd> port start 0
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> set fwd 5tswap
> Set 5tswap packet forwarding mode
> testpmd> start
> 5tswap packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native Logical Core 3 (socket 0) forwards packets on 1 streams:
> RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0)
> peer=02:00:00:00:00:00
>
> 5tswap packet forwarding packets/burst=32
> nb forwarding cores=1 - nb forwarding ports=1
> port 0: RX queue number: 1 Tx queue number: 1
> Rx offloads=0x200 Tx offloads=0x0
> RX queue: 0
> RX desc=512 - RX free threshold=64
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> RX Offloads=0x200
> TX queue: 0
> TX desc=512 - TX free threshold=0
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX offloads=0x0 - TX RS bit threshold=0
>
> testpmd> show fwd stats all
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986110 RX-dropped: 0 RX-total: 41986110
> TX-packets: 41986110 TX-dropped: 0 TX-total: 41986110
>
> ----------------------------------------------------------------------
> ------
>
> testpmd> show port xstats 0
> ###### NIC extended statistics for port 0
> rx_good_packets: 41986111
> tx_good_packets: 41986111
> rx_good_bytes: 3106973594
> tx_good_bytes: 3106973594
> rx_missed_errors: 0
> rx_errors: 0
> tx_errors: 0
> rx_mbuf_allocation_errors: 0
> rx_q0_packets: 41986111
> rx_q0_bytes: 3106973594
> rx_q0_errors: 0
> tx_q0_packets: 41986111
> tx_q0_bytes: 3106973594
> rx_wqe_errors: 0
> rx_unicast_packets: 41990938
> rx_unicast_bytes: 3107329412
> tx_unicast_packets: 41986111
> tx_unicast_bytes: 3106973594
> rx_multicast_packets: 1
> rx_multicast_bytes: 114
> tx_multicast_packets: 0
> tx_multicast_bytes: 0
> rx_broadcast_packets: 5
> rx_broadcast_bytes: 1710
> tx_broadcast_packets: 0
> tx_broadcast_bytes: 0
> tx_phy_packets: 0
> rx_phy_packets: 0
> rx_phy_crc_errors: 0
> tx_phy_bytes: 0
> rx_phy_bytes: 0
> rx_phy_in_range_len_errors: 0
> rx_phy_symbol_errors: 0
> rx_phy_discard_packets: 0
> tx_phy_discard_packets: 0
> tx_phy_errors: 0
> rx_out_of_buffer: 0
> tx_pp_missed_interrupt_errors: 0
> tx_pp_rearm_queue_errors: 0
> tx_pp_clock_queue_errors: 0
> tx_pp_timestamp_past_errors: 0
> tx_pp_timestamp_future_errors: 0
> tx_pp_jitter: 0
> tx_pp_wander: 0
> tx_pp_sync_lost: 0
> testpmd> q
> Command not found
> testpmd> exit
> Command not found
> testpmd> quit
> Telling cores to stop...
> Waiting for lcores to finish...
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986112 RX-dropped: 0 RX-total: 41986112
> TX-packets: 41986112 TX-dropped: 0 TX-total: 41986112
>
> ----------------------------------------------------------------------
> ------
>
> Best regards
> Yan Xiaoping
>
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: October 18, 2021 18:45
> To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
> Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan
> Darawsheh <rasland@nvidia.com>
> Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and
> rx_good_packets
>
> On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com> wrote:
>> I have cloned dpdk code from github
>>
>> [xiaopiya@fedora30 dpdk]$ git remote -v origin
>> https://github.com/DPDK/dpdk.git (fetch) origin
>> https://github.com/DPDK/dpdk.git (push)
>>
>> which tag should I use?
>>
>> Or do I have to download 20.11.3 from git.dpdk.org?
>>
>> Sorry, I don’t know the relation between https://github.com/DPDK and git.dpdk.org?
> Github DPDK/dpdk repo is a replication of the main repo hosted on dpdk.org servers.
>
> The official git repos and releases tarballs are on dpdk.org servers.
> The list of official releases tarballs is at:
> http://core.dpdk.org/download/ The main repo git is at:
> https://git.dpdk.org/dpdk/ The LTS/stable releases repo git is at:
> https://git.dpdk.org/dpdk-stable/
>
>
> --
> David Marchand
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-28 1:39 ` Yan, Xiaoping (NSB - CN/Hangzhou)
@ 2021-10-28 1:58 ` Gerry Wan
2021-10-29 0:44 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 1 reply; 19+ messages in thread
From: Gerry Wan @ 2021-10-28 1:58 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou)
Cc: Martin Weiser, David Marchand, Asaf Penso, Slava Ovsiienko,
Matan Azrad, Raslan Darawsheh, users
[-- Attachment #1: Type: text/plain, Size: 8504 bytes --]
Are the rx_missed_errors/rx_out_of_buffer counters always showing 0 no
matter how fast you push your generator?
I had a similar issue with missing counters on DPDK 20.11 that was fixed in
21.05 by applying this patch:
http://patchwork.dpdk.org/project/dpdk/patch/1614249901-307665-5-git-send-email-matan@nvidia.com/
Potentially relevant thread:
https://inbox.dpdk.org/users/CAAcwi38rs2Vk9MKhRGS3kAK+=dYAnDdECT7f+Ts-f13cANYB+Q@mail.gmail.com/
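[Editor's note: the question above — whether any drop counter accounts for the unicast/good gap — can be checked directly with the numbers from Yan's earlier testpmd xstats log. A sketch; counter names are as printed by testpmd xstats:]

```python
# Sketch: check whether the gap between rx_unicast_packets and
# rx_good_packets is accounted for by any drop counter.
# Values are copied from the testpmd xstats log earlier in this thread.
xstats = {
    "rx_unicast_packets": 41990938,
    "rx_good_packets": 41986111,
    "rx_missed_errors": 0,
    "rx_out_of_buffer": 0,
    "rx_phy_discard_packets": 0,
}
gap = xstats["rx_unicast_packets"] - xstats["rx_good_packets"]
accounted = (xstats["rx_missed_errors"]
             + xstats["rx_out_of_buffer"]
             + xstats["rx_phy_discard_packets"])
print(gap)              # 4827 packets reached the port but were not delivered
print(gap - accounted)  # 4827 -> fully unaccounted: no counter reflects the loss
```

If the imissed-statistics fix discussed below is in effect, `rx_missed_errors`/`rx_out_of_buffer` should absorb this gap instead of staying at zero.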
On Wed, Oct 27, 2021 at 6:39 PM Yan, Xiaoping (NSB - CN/Hangzhou) <
xiaoping.yan@nokia-sbell.com> wrote:
> Hi,
>
> I checked the counters from the PF with ethtool -S; there is no counter
> named 'rx_prio0_buf_discard'.
> Anyway, I checked all the counters in the ethtool output, and none of
> them reflects the dropped packets.
>
> Any suggestion from mlx maintainer? @Matan Azrad @Asaf Penso @Slava
> Ovsiienko @Raslan Darawsheh
>
> Thank you.
>
>
> Best regards
> Yan Xiaoping
>
> [quoted messages from Martin Weiser, Yan Xiaoping, and David Marchand trimmed; identical to the messages above]
[-- Attachment #2: Type: text/html, Size: 12160 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
2021-10-28 1:58 ` Gerry Wan
@ 2021-10-29 0:44 ` Yan, Xiaoping (NSB - CN/Hangzhou)
0 siblings, 0 replies; 19+ messages in thread
From: Yan, Xiaoping (NSB - CN/Hangzhou) @ 2021-10-29 0:44 UTC (permalink / raw)
To: Gerry Wan, Asaf Penso, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Cc: Martin Weiser, David Marchand, users
[-- Attachment #1: Type: text/plain, Size: 10131 bytes --]
Hi,
Yes, it’s always zero…
It seems dpdk-stable-20.11.3 already includes this patch.
[xiaopiya@fedora30 dpdk-stable-20.11.3]$ patch -p1 < ./4-4-net-mlx5-fix-imissed-statistics.diff
patching file drivers/net/mlx5/linux/mlx5_os.c
Reversed (or previously applied) patch detected! Assume -R? [n] n
Apply anyway? [n] ^C
[xiaopiya@fedora30 dpdk-stable-20.11.3]$ grep -r "mlx5_queue_counter_id_prepare" ./
./drivers/net/mlx5/linux/mlx5_os.c:mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./drivers/net/mlx5/linux/mlx5_os.c: mlx5_queue_counter_id_prepare(eth_dev);
./drivers/net/mlx5/linux/mlx5_os.c.rej:+mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./drivers/net/mlx5/linux/mlx5_os.c.rej:+ mlx5_queue_counter_id_prepare(eth_dev);
./4-4-net-mlx5-fix-imissed-statistics.diff:+mlx5_queue_counter_id_prepare(struct rte_eth_dev *dev)
./4-4-net-mlx5-fix-imissed-statistics.diff:+ mlx5_queue_counter_id_prepare(eth_dev);
Thank you.
Best regards
Yan Xiaoping
From: Gerry Wan <gerryw@stanford.edu>
Sent: October 28, 2021 9:58
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
Cc: Martin Weiser <martin.weiser@allegro-packets.com>; David Marchand <david.marchand@redhat.com>; Asaf Penso <asafp@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; users@dpdk.org
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
[quoted messages from Gerry Wan, Yan Xiaoping, Martin Weiser, and David Marchand trimmed; identical to the messages above]
[-- Attachment #2: Type: text/html, Size: 18023 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
[not found] ` <DM8PR12MB5494181EE59EA4018F71E57DCD949@DM8PR12MB5494.namprd12.prod.outlook.com>
@ 2021-11-11 13:25 ` Francesco Montorsi
0 siblings, 0 replies; 19+ messages in thread
From: Francesco Montorsi @ 2021-11-11 13:25 UTC (permalink / raw)
To: Asaf Penso, Yan, Xiaoping (NSB - CN/Hangzhou),
Gerry Wan, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
Cc: Martin Weiser, David Marchand, users
[-- Attachment #1: Type: text/plain, Size: 25532 bytes --]
Hi Asaf,
Thanks for your quick answer.
I’m trying to upgrade, will update you shortly.
However I think from reading the full email thread
https://inbox.dpdk.org/users/DM8PR12MB5494459B49353FACCACEF3C3CDB89@DM8PR12MB5494.namprd12.prod.outlook.com/t/#mc9927dd8f5f092d5042d95fa520b29765d17ddf8
that upgrading does not fix this problem (at least it didn't fix it for Yan, FWICS).
So please check on your side if possible.
Reproducing the problem just requires overloading the receiver side with too many PPS…
Thanks a lot,
Francesco
From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, November 11, 2021 6:28 AM
To: Francesco Montorsi <francesco.montorsi@infovista.com>; Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; Gerry Wan <gerryw@stanford.edu>; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Cc: Martin Weiser <martin.weiser@allegro-packets.com>; David Marchand <david.marchand@redhat.com>; users@dpdk.org
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
CAUTION: External Email : Be wary of clicking links or if this claims to be internal.
Hello Francesco,
To ensure the issue still exists, could you try the latest 19.11 LTS? 19.11.5 is a bit outdated and doesn't contain a lot of DPDK fixes.
In the meanwhile, I'll check internally about this issue and update.
Regards,
Asaf Penso
________________________________
From: Francesco Montorsi <francesco.montorsi@infovista.com<mailto:francesco.montorsi@infovista.com>>
Sent: Thursday, November 11, 2021 1:53:43 AM
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>; Gerry Wan <gerryw@stanford.edu<mailto:gerryw@stanford.edu>>; Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Cc: Martin Weiser <martin.weiser@allegro-packets.com<mailto:martin.weiser@allegro-packets.com>>; David Marchand <david.marchand@redhat.com<mailto:david.marchand@redhat.com>>; users@dpdk.org<mailto:users@dpdk.org> <users@dpdk.org<mailto:users@dpdk.org>>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi all,
I hit the exact same problem reported by Yan.
I’m using:
* 2 Mellanox CX5 MT28800 installed on 2 different servers, connected together
* Device FW (as reported by DPDK): 16.31.1014
* DPDK 19.11.5 (from 6WindGate actually)
I sent roughly 360M packets from one server to the other using "testpmd" (in --forward-mode=txonly).
My DPDK application on the other server is reporting the following xstats counters:
CounterName PORT0 PORT1 TOTAL
rx_good_packets: 76727920, 0, 76727920
tx_good_packets: 0, 0, 0
rx_good_bytes: 4910586880, 0, 4910586880
tx_good_bytes: 0, 0, 0
rx_missed_errors: 0, 0, 0
rx_errors: 0, 0, 0
tx_errors: 0, 0, 0
rx_mbuf_allocation_errors: 0, 0, 0
rx_q0packets: 0, 0, 0
rx_q0bytes: 0, 0, 0
rx_q0errors: 0, 0, 0
rx_q1packets: 0, 0, 0
rx_q1bytes: 0, 0, 0
rx_q1errors: 0, 0, 0
rx_q2packets: 0, 0, 0
rx_q2bytes: 0, 0, 0
rx_q2errors: 0, 0, 0
rx_q3packets: 0, 0, 0
rx_q3bytes: 0, 0, 0
rx_q3errors: 0, 0, 0
rx_q4packets: 0, 0, 0
rx_q4bytes: 0, 0, 0
rx_q4errors: 0, 0, 0
rx_q5packets: 76727920, 0, 76727920
rx_q5bytes: 4910586880, 0, 4910586880
rx_q5errors: 0, 0, 0
rx_q6packets: 0, 0, 0
rx_q6bytes: 0, 0, 0
rx_q6errors: 0, 0, 0
rx_q7packets: 0, 0, 0
rx_q7bytes: 0, 0, 0
rx_q7errors: 0, 0, 0
rx_q8packets: 0, 0, 0
rx_q8bytes: 0, 0, 0
rx_q8errors: 0, 0, 0
rx_q9packets: 0, 0, 0
rx_q9bytes: 0, 0, 0
rx_q9errors: 0, 0, 0
rx_q10packets: 0, 0, 0
rx_q10bytes: 0, 0, 0
rx_q10errors: 0, 0, 0
rx_q11packets: 0, 0, 0
rx_q11bytes: 0, 0, 0
rx_q11errors: 0, 0, 0
tx_q0packets: 0, 0, 0
tx_q0bytes: 0, 0, 0
rx_wqe_err: 0, 0, 0
rx_port_unicast_packets: 360316064, 0, 360316064
rx_port_unicast_bytes: 23060228096, 0, 23060228096
tx_port_unicast_packets: 0, 0, 0
tx_port_unicast_bytes: 0, 0, 0
rx_port_multicast_packets: 0, 0, 0
rx_port_multicast_bytes: 0, 0, 0
tx_port_multicast_packets: 0, 0, 0
tx_port_multicast_bytes: 0, 0, 0
rx_port_broadcast_packets: 0, 0, 0
rx_port_broadcast_bytes: 0, 0, 0
tx_port_broadcast_packets: 0, 0, 0
tx_port_broadcast_bytes: 0, 0, 0
tx_packets_phy: 0, 0, 0
rx_packets_phy: 0, 0, 0
rx_crc_errors_phy: 0, 0, 0
tx_bytes_phy: 0, 0, 0
rx_bytes_phy: 0, 0, 0
rx_in_range_len_errors_phy: 0, 0, 0
rx_symbol_err_phy: 0, 0, 0
rx_discards_phy: 0, 0, 0
tx_discards_phy: 0, 0, 0
tx_errors_phy: 0, 0, 0
rx_out_of_buffer: 0, 0, 0
So rx_good_packets is roughly 76M pkts, while rx_port_unicast_packets has correctly counted all 360M pkts sent by testpmd.
Of course my application layer has only been able to dequeue 76M pkts from the DPDK port, so the remaining packets (rx_port_unicast_packets - rx_good_packets) got lost but are not reported in the "imissed" or "ierrors" counters of rte_eth_stats…
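[Editor's note: the loss arithmetic described above, using the port 0 values from the xstats table, works out as follows (a sketch; `rx_missed_errors` is the imissed value as reported):]

```python
# Drop accounting for port 0, values copied from the xstats table above.
rx_port_unicast_packets = 360_316_064  # all packets that reached the port
rx_good_packets = 76_727_920           # packets actually delivered to the app
rx_missed_errors = 0                   # imissed as reported

lost = rx_port_unicast_packets - rx_good_packets
print(lost)                     # 283588144 packets lost
print(lost - rx_missed_errors)  # 283588144 -> none of them appear in imissed
```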
It would not be easy for me to test against the latest DPDK… also, from what Yan has reported, the issue is still present in DPDK stable 20.11.3…
@Mellanox maintainers: any update on this issue? Is there a workaround to get the dropped packets back into the rte_eth_stats counters?
Thanks
Francesco Montorsi
From: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Sent: Friday, October 29, 2021 2:44 AM
To: Gerry Wan <gerryw@stanford.edu<mailto:gerryw@stanford.edu>>; Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Cc: Martin Weiser <martin.weiser@allegro-packets.com<mailto:martin.weiser@allegro-packets.com>>; David Marchand <david.marchand@redhat.com<mailto:david.marchand@redhat.com>>; users@dpdk.org<mailto:users@dpdk.org>
Subject: RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
[quoted reply from Yan Xiaoping trimmed; identical to the message above]
From: Gerry Wan <gerryw@stanford.edu<mailto:gerryw@stanford.edu>>
Sent: 2021年10月28日 9:58
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com<mailto:xiaoping.yan@nokia-sbell.com>>
Cc: Martin Weiser <martin.weiser@allegro-packets.com<mailto:martin.weiser@allegro-packets.com>>; David Marchand <david.marchand@redhat.com<mailto:david.marchand@redhat.com>>; Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>; users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Are the rx_missed_errors/rx_out_of_buffer counters always showing 0 no matter how fast you push your generator?
I had a similar issue with missing counters on DPDK 20.11 that was fixed on 21.05 by applying this patch:
http://patchwork.dpdk.org/project/dpdk/patch/1614249901-307665-5-git-send-email-matan@nvidia.com/
Potentially relevant thread:
https://inbox.dpdk.org/users/CAAcwi38rs2Vk9MKhRGS3kAK+=dYAnDdECT7f+Ts-f13cANYB+Q@mail.gmail.com/
On Wed, Oct 27, 2021 at 6:39 PM Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com> wrote:
Hi,
I checked the counters from the PF with ethtool -S; there is no counter named 'rx_prio0_buf_discard'.
Anyway, I checked all counters in the ethtool output, and none of them reflects the dropped packets.
Any suggestions from the mlx5 maintainers? @Matan Azrad @Asaf Penso @Slava Ovsiienko @Raslan Darawsheh
Thank you.
Best regards
Yan Xiaoping
-----Original Message-----
From: Martin Weiser <martin.weiser@allegro-packets.com>
Sent: October 27, 2021 15:54
To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>; David Marchand <david.marchand@redhat.com>
Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
Hi,
you may want to check the counter 'rx_prio0_buf_discard' with ethtool (it is not available in DPDK xstats, as the counter seems to be global for the card rather than per port).
I opened a ticket a while ago regarding this issue:
https://bugs.dpdk.org/show_bug.cgi?id=749
Best regards,
Martin
Am 27.10.21 um 08:17 schrieb Yan, Xiaoping (NSB - CN/Hangzhou):
> Hi,
>
> I tried with dpdk 20.11.3 downloaded from
> https://fast.dpdk.org/rel/dpdk-20.11.3.tar.xz
> The problem still exists:
> 1. there is packet loss at 2 Mpps (small packets); 2. there is no NIC
> counter for the dropped packets.
>
> traffic generator stats: sends 41990938, receives back 41986105, lost
> 4833; testpmd fwd stats: RX-packets: 41986110, TX-packets: 41986110
> port xstats: rx_unicast_packets: 41990938 (all packets reached the NIC port), rx_good_packets: 41986111 (some are lost), but there is no counter for the lost packets.
>
> Here is the log:
> [root@up-0 /]# dpdk-testpmd -l "2,3" --legacy-mem --socket-mem
> "5000,0" -a 0000:03:06.7 -- -i --nb-cores=1 --portmask=0x1 --rxd=512
> --txd=512
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> EAL: Probe PCI driver: mlx5_pci (15b3:1018) device: 0000:03:06.7
> (socket 0)
> mlx5_pci: cannot bind mlx5 socket: Read-only file system
> mlx5_pci: Cannot initialize socket: Read-only file system
> EAL: No legacy callbacks, legacy socket not created Interactive-mode
> selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
>
> Configuring Port 0 (socket 0)
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> port stop 0
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> vlan set filter on 0
> testpmd> rx_vlan add 767 0
> testpmd> port start 0
> Port 0: 7A:9A:8A:A6:86:93
> Checking link statuses...
> Done
> testpmd> set fwd 5tswap
> Set 5tswap packet forwarding mode
> testpmd> start
> 5tswap packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native Logical Core 3 (socket 0) forwards packets on 1 streams:
> RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0)
> peer=02:00:00:00:00:00
>
> 5tswap packet forwarding packets/burst=32
> nb forwarding cores=1 - nb forwarding ports=1
> port 0: RX queue number: 1 Tx queue number: 1
> Rx offloads=0x200 Tx offloads=0x0
> RX queue: 0
> RX desc=512 - RX free threshold=64
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> RX Offloads=0x200
> TX queue: 0
> TX desc=512 - TX free threshold=0
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX offloads=0x0 - TX RS bit threshold=0
>
> testpmd> show fwd stats all
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986110 RX-dropped: 0 RX-total: 41986110
> TX-packets: 41986110 TX-dropped: 0 TX-total: 41986110
>
> ----------------------------------------------------------------------
> ------
>
> testpmd> show port xstats 0
> ###### NIC extended statistics for port 0
> rx_good_packets: 41986111
> tx_good_packets: 41986111
> rx_good_bytes: 3106973594
> tx_good_bytes: 3106973594
> rx_missed_errors: 0
> rx_errors: 0
> tx_errors: 0
> rx_mbuf_allocation_errors: 0
> rx_q0_packets: 41986111
> rx_q0_bytes: 3106973594
> rx_q0_errors: 0
> tx_q0_packets: 41986111
> tx_q0_bytes: 3106973594
> rx_wqe_errors: 0
> rx_unicast_packets: 41990938
> rx_unicast_bytes: 3107329412
> tx_unicast_packets: 41986111
> tx_unicast_bytes: 3106973594
> rx_multicast_packets: 1
> rx_multicast_bytes: 114
> tx_multicast_packets: 0
> tx_multicast_bytes: 0
> rx_broadcast_packets: 5
> rx_broadcast_bytes: 1710
> tx_broadcast_packets: 0
> tx_broadcast_bytes: 0
> tx_phy_packets: 0
> rx_phy_packets: 0
> rx_phy_crc_errors: 0
> tx_phy_bytes: 0
> rx_phy_bytes: 0
> rx_phy_in_range_len_errors: 0
> rx_phy_symbol_errors: 0
> rx_phy_discard_packets: 0
> tx_phy_discard_packets: 0
> tx_phy_errors: 0
> rx_out_of_buffer: 0
> tx_pp_missed_interrupt_errors: 0
> tx_pp_rearm_queue_errors: 0
> tx_pp_clock_queue_errors: 0
> tx_pp_timestamp_past_errors: 0
> tx_pp_timestamp_future_errors: 0
> tx_pp_jitter: 0
> tx_pp_wander: 0
> tx_pp_sync_lost: 0
> testpmd> q
> Command not found
> testpmd> exit
> Command not found
> testpmd> quit
> Telling cores to stop...
> Waiting for lcores to finish...
>
> ---------------------- Forward statistics for port 0 ----------------------
> RX-packets: 41986112 RX-dropped: 0 RX-total: 41986112
> TX-packets: 41986112 TX-dropped: 0 TX-total: 41986112
>
> ----------------------------------------------------------------------
> ------
>
> Best regards
> Yan Xiaoping
>
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: October 18, 2021 18:45
> To: Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com>
> Cc: Asaf Penso <asafp@nvidia.com>; users@dpdk.org; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan
> Darawsheh <rasland@nvidia.com>
> Subject: Re: mlx5 VF packet lost between rx_port_unicast_packets and
> rx_good_packets
>
> On Mon, Oct 18, 2021 at 11:28 AM Yan, Xiaoping (NSB - CN/Hangzhou) <xiaoping.yan@nokia-sbell.com> wrote:
>> I have cloned dpdk code from github
>>
>> [xiaopiya@fedora30 dpdk]$ git remote -v origin
>> https://github.com/DPDK/dpdk.git (fetch) origin
>> https://github.com/DPDK/dpdk.git (push)
>>
>> which tag should I use?
>>
>> Or do I have to download 20.11.3 from git.dpdk.org?
>>
>> Sorry, I don’t know the relation between https://github.com/DPDK and git.dpdk.org?
> Github DPDK/dpdk repo is a replication of the main repo hosted on dpdk.org servers.
>
> The official git repos and releases tarballs are on dpdk.org servers.
> The list of official releases tarballs is at:
> http://core.dpdk.org/download/ The main repo git is at:
> https://git.dpdk.org/dpdk/ The LTS/stable releases repo git is at:
> https://git.dpdk.org/dpdk-stable/
>
>
> --
> David Marchand
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets
@ 2021-11-10 23:53 Francesco Montorsi
[not found] ` <DM8PR12MB5494181EE59EA4018F71E57DCD949@DM8PR12MB5494.namprd12.prod.outlook.com>
0 siblings, 1 reply; 19+ messages in thread
From: Francesco Montorsi @ 2021-11-10 23:53 UTC (permalink / raw)
To: Yan, Xiaoping (NSB - CN/Hangzhou),
Gerry Wan, Asaf Penso, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Cc: Martin Weiser, David Marchand, users
Hi all,
I hit the exact same problem reported by Yan.
I’m using:
* 2 Mellanox CX5 MT28800 installed on 2 different servers, connected together
* Device FW (as reported by DPDK): 16.31.1014
* DPDK 19.11.5 (from 6WindGate actually)
I sent roughly 360M packets from one server to the other using testpmd (with --forward-mode=txonly).
My DPDK application on the other server reports the following xstats counters:
CounterName PORT0 PORT1 TOTAL
rx_good_packets: 76727920, 0, 76727920
tx_good_packets: 0, 0, 0
rx_good_bytes: 4910586880, 0, 4910586880
tx_good_bytes: 0, 0, 0
rx_missed_errors: 0, 0, 0
rx_errors: 0, 0, 0
tx_errors: 0, 0, 0
rx_mbuf_allocation_errors: 0, 0, 0
rx_q0packets: 0, 0, 0
rx_q0bytes: 0, 0, 0
rx_q0errors: 0, 0, 0
rx_q1packets: 0, 0, 0
rx_q1bytes: 0, 0, 0
rx_q1errors: 0, 0, 0
rx_q2packets: 0, 0, 0
rx_q2bytes: 0, 0, 0
rx_q2errors: 0, 0, 0
rx_q3packets: 0, 0, 0
rx_q3bytes: 0, 0, 0
rx_q3errors: 0, 0, 0
rx_q4packets: 0, 0, 0
rx_q4bytes: 0, 0, 0
rx_q4errors: 0, 0, 0
rx_q5packets: 76727920, 0, 76727920
rx_q5bytes: 4910586880, 0, 4910586880
rx_q5errors: 0, 0, 0
rx_q6packets: 0, 0, 0
rx_q6bytes: 0, 0, 0
rx_q6errors: 0, 0, 0
rx_q7packets: 0, 0, 0
rx_q7bytes: 0, 0, 0
rx_q7errors: 0, 0, 0
rx_q8packets: 0, 0, 0
rx_q8bytes: 0, 0, 0
rx_q8errors: 0, 0, 0
rx_q9packets: 0, 0, 0
rx_q9bytes: 0, 0, 0
rx_q9errors: 0, 0, 0
rx_q10packets: 0, 0, 0
rx_q10bytes: 0, 0, 0
rx_q10errors: 0, 0, 0
rx_q11packets: 0, 0, 0
rx_q11bytes: 0, 0, 0
rx_q11errors: 0, 0, 0
tx_q0packets: 0, 0, 0
tx_q0bytes: 0, 0, 0
rx_wqe_err: 0, 0, 0
rx_port_unicast_packets: 360316064, 0, 360316064
rx_port_unicast_bytes: 23060228096, 0, 23060228096
tx_port_unicast_packets: 0, 0, 0
tx_port_unicast_bytes: 0, 0, 0
rx_port_multicast_packets: 0, 0, 0
rx_port_multicast_bytes: 0, 0, 0
tx_port_multicast_packets: 0, 0, 0
tx_port_multicast_bytes: 0, 0, 0
rx_port_broadcast_packets: 0, 0, 0
rx_port_broadcast_bytes: 0, 0, 0
tx_port_broadcast_packets: 0, 0, 0
tx_port_broadcast_bytes: 0, 0, 0
tx_packets_phy: 0, 0, 0
rx_packets_phy: 0, 0, 0
rx_crc_errors_phy: 0, 0, 0
tx_bytes_phy: 0, 0, 0
rx_bytes_phy: 0, 0, 0
rx_in_range_len_errors_phy: 0, 0, 0
rx_symbol_err_phy: 0, 0, 0
rx_discards_phy: 0, 0, 0
tx_discards_phy: 0, 0, 0
tx_errors_phy: 0, 0, 0
rx_out_of_buffer: 0, 0, 0
So rx_good_packets is roughly 76M packets, while rx_port_unicast_packets correctly counted all 360M packets sent by testpmd.
Of course, my application layer was only able to dequeue 76M packets from the DPDK port, so the remaining (rx_port_unicast_packets - rx_good_packets) were lost, yet they are not reported in the “imissed” or “ierrors” counters of rte_eth_stats…
It would not be easy for me to test against the latest DPDK… and from what Yan has reported, the issue is still present in DPDK stable 20.11.3…
@Mellanox maintainers: any update on this issue? Is there a workaround to get the dropped packets back into the rte_eth_stats counters?
Thanks
Francesco Montorsi
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2021-11-15 17:39 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-05 10:07 [dpdk-users] mlx5 VF packet lost between rx_port_unicast_packets and rx_good_packets Yan, Xiaoping (NSB - CN/Hangzhou)
2021-07-13 12:35 ` Asaf Penso
2021-07-26 4:52 ` Yan, Xiaoping (NSB - CN/Hangzhou)
[not found] ` <DM8PR12MB54940E42337767B960E6BD28CDA69@DM8PR12MB5494.namprd12.prod.outlook.com>
[not found] ` <16515cb8ebbb4e7d833914040fa5943f@nokia-sbell.com>
[not found] ` <df15fbaac9a644fa93c55e1cfbc9226b@nokia-sbell.com>
[not found] ` <DM8PR12MB5494A1A84DA28E47CCB7496ECDA99@DM8PR12MB5494.namprd12.prod.outlook.com>
[not found] ` <22e029bf2ae6407e910cb2dcf11d49ce@nokia-sbell.com>
[not found] ` <e8e7cf781dd444fa9648facfc824d4c9@nokia-sbell.com>
[not found] ` <4b0dd266b53541c7bc4964c29e24a0e6@nokia-sbell.com>
2021-09-30 8:05 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 6:55 ` Asaf Penso
2021-10-14 9:33 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 9:50 ` Asaf Penso
2021-10-14 10:15 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-14 11:48 ` Asaf Penso
2021-10-18 9:28 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-18 10:45 ` David Marchand
2021-10-27 6:17 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-27 7:26 ` David Marchand
2021-10-27 7:54 ` Martin Weiser
2021-10-28 1:39 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-10-28 1:58 ` Gerry Wan
2021-10-29 0:44 ` Yan, Xiaoping (NSB - CN/Hangzhou)
2021-11-10 23:53 Francesco Montorsi
[not found] ` <DM8PR12MB5494181EE59EA4018F71E57DCD949@DM8PR12MB5494.namprd12.prod.outlook.com>
2021-11-11 13:25 ` Francesco Montorsi
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).