* [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
@ 2020-03-11 23:40 Yan Lei
2020-03-11 23:44 ` Yan Lei
2020-03-12 6:45 ` sachin gupta
0 siblings, 2 replies; 6+ messages in thread
From: Yan Lei @ 2020-03-11 23:40 UTC (permalink / raw)
To: users
Hi,
I am currently struggling to get more than 53Gb/s with 64B packets on both the MCX515A-CCAT and MCX516A-CCAT adapters when running a DPDK app that generates and transmits packets. With 256B packets I can get 98Gb/s.
Has anyone seen the same performance on these NICs? I checked the performance reports at https://core.dpdk.org/perf-reports/, but there are no numbers for these NICs.
Is this an inherent limitation of these NICs (reaching 100Gb/s only with larger packets)? If not, which firmware/driver/DPDK/system configurations could I tune to get 100Gb/s with 64B packets?
My setup is as follows:
- CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
- NIC: Mellanox MCX515A-CCAT / MCX516A-CCAT (using only one port for TX, installed on PCIe Gen3 x16)
- DPDK: 19.05
- RDMA-CORE: v28.0
- Kernel: 5.3.0
- OS: Ubuntu 18.04
- Firmware: 16.26.1040
I measured the TX rate with DPDK's testpmd:
$ ./testpmd -l 3-13 -n 4 -w 02:00.0 -- -i --port-topology=chained --nb-ports=1 --rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 --forward-mode=txonly
So 10 cores generate and transmit 64B packets on 10 NIC queues. Your feedback will be much appreciated.
Thanks, Lei
* Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
2020-03-11 23:40 [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets Yan Lei
@ 2020-03-11 23:44 ` Yan Lei
2020-03-12 6:45 ` sachin gupta
1 sibling, 0 replies; 6+ messages in thread
From: Yan Lei @ 2020-03-11 23:44 UTC (permalink / raw)
To: users
The previous email had a format problem, sorry for that; updated version below!
Hi,
I am currently struggling to get more than 53Gb/s with 64B packets on both the MCX515A-CCAT and MCX516A-CCAT adapters when running a DPDK app that generates and transmits packets. With 256B packets I can get 98Gb/s.
Has anyone seen the same performance on these NICs? I checked the performance reports at https://core.dpdk.org/perf-reports/, but there are no numbers for these NICs.
Is this an inherent limitation of these NICs (reaching 100Gb/s only with larger packets)? If not, which firmware/driver/DPDK/system configurations could I tune to get 100Gb/s with 64B packets?
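For context, the line-rate arithmetic for 64B packets on a 100GbE link, assuming standard Ethernet framing overhead (these figures are generic back-of-the-envelope numbers, not measured on these NICs):
# Each 64B frame occupies 64B + 7B preamble + 1B start delimiter + 12B
# inter-frame gap = 84B on the wire, so the 64B line rate is:
$ python3 -c 'print(100e9 / (84 * 8) / 1e6)'
# ~148.81, i.e. roughly 148.8 Mpps to saturate 100GbE with 64B packets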
My setup is as follows:
- CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
- NIC: Mellanox MCX515A-CCAT / MCX516A-CCAT (Using only one port for TX, installed on PCIe Gen3 x16)
- DPDK: 19.05
- RDMA-CORE: v28.0
- Kernel: 5.3.0
- OS: Ubuntu 18.04
- Firmware: 16.26.1040
I measured the TX rate with DPDK's testpmd:
$ ./testpmd -l 3-13 -n 4 -w 02:00.0 -- -i --port-topology=chained --nb-ports=1 --rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 --forward-mode=txonly
So 10 cores generate and transmit 64B packets on 10 NIC queues.
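For completeness, the negotiated PCIe link speed and width can be checked like this, to rule out a degraded slot (the 02:00.0 address matches the testpmd command above):
$ sudo lspci -s 02:00.0 -vv | grep -E 'LnkCap|LnkSta'
# LnkSta should report "Speed 8GT/s, Width x16" for a healthy PCIe Gen3 x16 slot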
Your feedback will be much appreciated.
Thanks,
Lei
* Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
2020-03-11 23:40 [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets Yan Lei
2020-03-11 23:44 ` Yan Lei
@ 2020-03-12 6:45 ` sachin gupta
2020-03-14 18:37 ` Yan Lei
1 sibling, 1 reply; 6+ messages in thread
From: sachin gupta @ 2020-03-12 6:45 UTC (permalink / raw)
To: users, Yan Lei
Hi Lei,
The smaller the packet size, the higher the number of packets per second the hardware and software must handle. I believe this is an inherent problem in all systems, even ones with proprietary hardware.
In general, applications that use such small packets are rare, and you will see a mix of traffic in the system.
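To put rough numbers on that: at the 64B line rate (~148.8 Mpps, per the arithmetic earlier in the thread) spread over the 10 cores in the original testpmd run, each 2.6 GHz core has on the order of 175 cycles per packet (a back-of-the-envelope sketch, not a measurement):
$ python3 -c 'pps_per_core = 100e9 / (84 * 8) / 10; print(2.6e9 / pps_per_core)'
# ~174.7 cycles available per packet per core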
Regards,
Sachin
* Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
2020-03-12 6:45 ` sachin gupta
@ 2020-03-14 18:37 ` Yan Lei
2020-03-16 7:44 ` sachin gupta
0 siblings, 1 reply; 6+ messages in thread
From: Yan Lei @ 2020-03-14 18:37 UTC (permalink / raw)
To: sachin gupta, users
Hi Sachin,
Thanks a lot for the answer. The issue is resolved: I was able to get 98Gb/s with 64B packets after setting the PCI maxReadRequest to 1024 and turning off NIC flow control. These optimization settings are actually documented in the mlx5 PMD guide; my bad for having ignored them...
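For anyone who hits the same wall, this is roughly the recipe from the Mellanox tuning notes (a sketch assuming the NIC at 02:00.0 and an interface name of enp2s0f0, both from my setup; the MaxReadReq field lives in the leading hex digit of the Device Control word at config offset 0x68 on these adapters, but verify the offset on your own system):
# Read the current Device Control word (prints a 4-digit hex value, e.g. 2936)
$ sudo setpci -s 02:00.0 68.w
# Set the leading digit to 3 (= 1024B max read request), keeping your own lower digits
$ sudo setpci -s 02:00.0 68.w=3936
# Turn off Ethernet flow control on the port
$ sudo ethtool -A enp2s0f0 rx off tx off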
Cheers,
Lei
* Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
2020-03-14 18:37 ` Yan Lei
@ 2020-03-16 7:44 ` sachin gupta
2020-03-16 10:44 ` Yan Lei
0 siblings, 1 reply; 6+ messages in thread
From: sachin gupta @ 2020-03-16 7:44 UTC (permalink / raw)
To: Yan Lei, users
Cool, Yan. Thanks for letting me know as well. Can you also let me know the link capacity?
Sachin
Sent from Yahoo Mail for iPhone
* Re: [dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets
2020-03-16 7:44 ` sachin gupta
@ 2020-03-16 10:44 ` Yan Lei
0 siblings, 0 replies; 6+ messages in thread
From: Yan Lei @ 2020-03-16 10:44 UTC (permalink / raw)
To: sachin gupta, users
Hi Sachin,
By link capacity do you mean the bandwidth of the NIC port? If so, the link capacity is 100Gb/s.
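If you want to double-check it on your side, the negotiated speed shows up in ethtool (the interface name here is a placeholder from my setup):
$ ethtool enp2s0f0 | grep Speed
# expect: Speed: 100000Mb/s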
Cheers,
Lei