DPDK usage discussions
* RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
       [not found]   ` <PH7PR12MB69057F67A8521352451073A3D0812@PH7PR12MB6905.namprd12.prod.outlook.com>
@ 2025-04-28  7:01     ` Dariusz Sosnowski
  2025-04-28  7:02       ` Bing Zhao
  0 siblings, 1 reply; 2+ messages in thread
From: Dariusz Sosnowski @ 2025-04-28  7:01 UTC (permalink / raw)
  To: Bing Zhao, Slava Ovsiienko, 韩康康, Ori Kam; +Cc: users

Adding correct mail addresses for users mailing list and to Ori Kam.

> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, April 28, 2025 9:00 AM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; 韩康康 <961373042@qq.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>; orika <orika@nivida.com>
> Cc: users <users@dpdk.com>
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
>
> Hi Kangkang,
>
> Please refer to the DPDK performance reports and the configurations listed under the DOC link on dpdk.org.
> They show how to configure the HW and how to run testpmd / l2fwd for SCP / ZPL testing. You can also search for the DPDK testing white paper from CTC (中国电信DPDK测试白皮书, "China Telecom DPDK Testing White Paper").
> The docs explain the NUMA / PCIe / lcore affinity / queue depth settings.
>
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Monday, April 28, 2025 1:57 PM
> To: 韩康康 <961373042@qq.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>; Bing Zhao <bingz@nvidia.com>; orika <orika@nivida.com>
> Cc: users <users@dpdk.com>
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
>
> Hi,
>
> First of all, please increase the number of queues and handling cores. For a 100 Gbps link, 4 to 8 queues/cores are usually needed.
> You can also try combinations of multiple queues per core (2..4 queues handled by one core).
> The offload options “inline” and MPRQ might also be useful to reach wire speed for small packets.
> Please see the Nvidia/Mellanox performance reports (https://core.dpdk.org/perf-reports/) for the details.
>
> With best regards,
> Slava
>
> From: 韩康康 <961373042@qq.com>
> Sent: Friday, April 25, 2025 1:36 PM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Bing Zhao <bingz@nvidia.com>; orika <orika@nivida.com>
> Cc: users <users@dpdk.com>
> Subject: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
>
> Hi all,
> I am using dpdk-testpmd and a packet generator to test bandwidth according to RFC 2544.
> However, I observed that the CX5 reports rx_phy_discard_packets at low bandwidth, resulting in abnormal measurements under the zero-packet-loss condition.
>
> dpdk version: dpdk-21.11.5
> ethtool -i enp161s0f1np1:
>     driver: mlx5_core
>     version: 5.8-6.0.4
>     firmware-version: 16.32.1010
> Hardware: AMD EPYC 7742 64-core Processor
> dpdk-testpmd:
> dpdk-testpmd -l 96-111 -n 4 -a 0000:a1:00.1 -- -i --rxq=1  --txq=1 --txd=8192 --rxd=8192 --nb-cores=1  --burst=128 -a --mbcache=512  --rss-udp
> test result:
> frame size (bytes)    offered load (%)    packet loss rate
> 128                   15.69               0.000211
> 256                   14.148              0.0004
> 512                   14.148              0.00008
> 1518                  14.92               0.00099
>
> I'd like to ask: is this an issue with the CX5 NIC? How can I debug it to eliminate the packet drops?
>
> 韩康康
> 961373042@qq.com

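As a rough illustration of the tuning advice above (more Rx/Tx queues spread over worker cores local to the NIC's NUMA node, plus the MPRQ offload), a testpmd invocation could be sketched as follows. The core range, queue counts, and mlx5 devargs values are assumptions to adapt to the system at hand and to verify against the mlx5 PMD guide for the DPDK release in use; this is not a validated configuration.

    # Sketch only: 8 Rx/Tx queues handled by 8 worker cores, with Multi-Packet RQ
    # enabled through mlx5 devargs (mprq_en, rxqs_min_mprq). Tx "inline" behaviour
    # is controlled by the separate txq_inline_* devargs described in the mlx5 guide.
    dpdk-testpmd -l 96-111 -n 4 \
        -a 0000:a1:00.1,mprq_en=1,rxqs_min_mprq=8 \
        -- -i --rxq=8 --txq=8 --txd=8192 --rxd=8192 \
           --nb-cores=8 --burst=128 --mbcache=512 --rss-udp

Whether MPRQ and inlining actually help depends on packet size and PCIe headroom, so the performance-report configurations referenced in the thread remain the better source for exact values.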
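
For the question about how to debug the drops, one low-risk first step is to watch which counters increment while traffic is running. Since mlx5 is a bifurcated driver, the kernel netdev stays visible alongside testpmd; the commands below are a sketch using the interface name from the report, and the exact counter names vary with driver and firmware version.

    # Inside the testpmd prompt: dump extended statistics for port 0
    # (rx_phy_discard_packets and rx_out_of_buffer show up here on mlx5).
    testpmd> show port xstats 0

    # From another shell, read the equivalent counters via the kernel netdev:
    ethtool -S enp161s0f1np1 | grep -E 'discard|out_of_buffer'

Comparing which counter grows helps separate drops at the physical port from Rx queues running out of buffers; both cases typically improve with the multi-queue setup suggested above.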

^ permalink raw reply	[flat|nested] 2+ messages in thread

* RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
  2025-04-28  7:01     ` net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions Dariusz Sosnowski
@ 2025-04-28  7:02       ` Bing Zhao
  0 siblings, 0 replies; 2+ messages in thread
From: Bing Zhao @ 2025-04-28  7:02 UTC (permalink / raw)
  To: Dariusz Sosnowski, Slava Ovsiienko, 韩康康, Ori Kam
  Cc: users

Thanks a lot. I didn't notice that the email addresses were incorrect in the original message.

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Monday, April 28, 2025 3:01 PM
> To: Bing Zhao <bingz@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; 韩康康 <961373042@qq.com>; Ori Kam
> <orika@nvidia.com>
> Cc: users@dpdk.org
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
> 
> Adding correct mail addresses for users mailing list and to Ori Kam.
> 
> > From: Bing Zhao <bingz@nvidia.com>
> > Sent: Monday, April 28, 2025 9:00 AM
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>; 韩康康 <961373042@qq.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>; orika <orika@nivida.com>
> > Cc: users <users@dpdk.com>
> > Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
> >
> > Hi Kangkang,
> >
> > Please refer to the DPDK performance reports and the configurations listed under the DOC link on dpdk.org.
> > They show how to configure the HW and how to run testpmd / l2fwd for SCP / ZPL testing. You can also search for the DPDK testing white paper from CTC (中国电信DPDK测试白皮书, "China Telecom DPDK Testing White Paper").
> > The docs explain the NUMA / PCIe / lcore affinity / queue depth settings.
> >
> > From: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Sent: Monday, April 28, 2025 1:57 PM
> > To: 韩康康 <961373042@qq.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>; Bing Zhao <bingz@nvidia.com>; orika <orika@nivida.com>
> > Cc: users <users@dpdk.com>
> > Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
> >
> > Hi,
> >
> > First of all, please increase the number of queues and handling cores. For a 100 Gbps link, 4 to 8 queues/cores are usually needed.
> > You can also try combinations of multiple queues per core (2..4 queues handled by one core).
> > The offload options “inline” and MPRQ might also be useful to reach wire speed for small packets.
> > Please see the Nvidia/Mellanox performance reports (https://core.dpdk.org/perf-reports/) for the details.
> >
> > With best regards,
> > Slava
> >
> > From: 韩康康 <961373042@qq.com>
> > Sent: Friday, April 25, 2025 1:36 PM
> > To: Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Bing Zhao <bingz@nvidia.com>; orika <orika@nivida.com>
> > Cc: users <users@dpdk.com>
> > Subject: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions.
> >
> > Hi all,
> > I am using dpdk-testpmd and a packet generator to test bandwidth according to RFC 2544.
> > However, I observed that the CX5 reports rx_phy_discard_packets at low bandwidth, resulting in abnormal measurements under the zero-packet-loss condition.
> >
> > dpdk version: dpdk-21.11.5
> > ethtool -i enp161s0f1np1:
> >     driver: mlx5_core
> >     version: 5.8-6.0.4
> >     firmware-version: 16.32.1010
> > Hardware: AMD EPYC 7742 64-core Processor
> > dpdk-testpmd:
> > dpdk-testpmd -l 96-111 -n 4 -a 0000:a1:00.1 -- -i --rxq=1  --txq=1 --txd=8192 --rxd=8192 --nb-cores=1  --burst=128 -a --mbcache=512  --rss-udp
> > test result:
> > frame size (bytes)    offered load (%)    packet loss rate
> > 128                   15.69               0.000211
> > 256                   14.148              0.0004
> > 512                   14.148              0.00008
> > 1518                  14.92               0.00099
> >
> > I'd like to ask: is this an issue with the CX5 NIC? How can I debug it to eliminate the packet drops?
> >
> > 韩康康
> > 961373042@qq.com
> 


^ permalink raw reply	[flat|nested] 2+ messages in thread

end of thread, other threads:[~2025-04-28  7:02 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <tencent_32F472A487C557D466BA2338E4FDF710A609@qq.com>
     [not found] ` <MN6PR12MB85678369DDCB1509C887F66ADF812@MN6PR12MB8567.namprd12.prod.outlook.com>
     [not found]   ` <PH7PR12MB69057F67A8521352451073A3D0812@PH7PR12MB6905.namprd12.prod.outlook.com>
2025-04-28  7:01     ` net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under low bandwidth conditions Dariusz Sosnowski
2025-04-28  7:02       ` Bing Zhao
