From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance on mixed traffic ( rxq_cqe_comp_en=4 )
Date: Fri, 15 Jul 2022 16:06:58 +0000
Message-ID: <bug-1053-3@http.bugs.dpdk.org/>
https://bugs.dpdk.org/show_bug.cgi?id=1053
Bug ID: 1053
Summary: ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance
on mixed traffic ( rxq_cqe_comp_en=4 )
Product: DPDK
Version: 21.11
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: rtox@gmx.net
Target Milestone: ---
Our team has been chasing major performance issues with ConnectX6 cards.
*Customer challenge:*
Flow-stable (symmetric RSS) load-balancing of flows to 8 worker lcores.
*Observation:*
Performance is fine up to 100 Gbps using either TCP-only *or* UDP-only traffic
profiles.
Mixed traffic drops to roughly 50% loss, with all dropped packets showing up in
xstats as rx_phy_discard_packets.
Card info is at the end of this email.
There appears to be a huge performance issue with mixed UDP/TCP traffic when
using symmetric load-balancing across multiple workers.
E.g., building the DPDK v20.11 (or newer) testpmd app and running:
> sudo ./dpdk-testpmd -n 8 -l 4,6,8,10,12,14,16,18,20 -a 0000:4b:00.0,rxq_cqe_comp_en=4 -a 0000:4b:00.1,rxq_cqe_comp_en=4 -- --forward-mode=mac --rxq=8 --txq=8 --nb-cores=8 --numa -i -a --disable-rss
and configuring:
> flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 0 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 1 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 1 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
leads to *significant* packet drops at loads above 50 Gbps with any kind of mixed
UDP/TCP traffic, e.g. the TRex sfr3 profile:
https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/cap2/sfr3.yaml
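In case it helps to reproduce outside of testpmd, here is a minimal C sketch of
the same rule through the rte_flow API: one ingress eth/ipv4/tcp rule with an
RSS action over queues 0-7 and the repeated-0x6D5A key used above (the common
symmetric-RSS key). Port ID, error handling, and the helper name are
illustrative, not taken from our application.
{code}
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* 40-byte symmetric RSS key: "6D5A" repeated, same as the testpmd commands. */
static uint8_t sym_rss_key[40] = {
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
};

static struct rte_flow *
create_ipv4_tcp_rss_rule(uint16_t port_id)
{
	static uint16_t queues[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Pattern: eth / ipv4 / tcp / end */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Action: rss types ipv4-tcp, queues 0-7, symmetric key */
	struct rte_flow_action_rss rss = {
		.types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		.key_len = sizeof(sym_rss_key),
		.key = sym_rss_key,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Returns NULL on failure; 'error' then holds the PMD message. */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}
{code}
The UDP variants and the second port differ only in RTE_FLOW_ITEM_TYPE_UDP /
RTE_ETH_RSS_NONFRAG_IPV4_UDP and the port_id argument.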
Whenever those packet drops occur, they show up in xstats as
rx_phy_discard_packets.
On the other hand, a TCP-only or UDP-only traffic profile scales perfectly up
to 100 Gbps without drops.
Thanks for your help!
> {code}
> ConnectX6DX
> <Devices>
>   <Device pciName="0000:4b:00.0" type="ConnectX6DX" psid="DEL0000000027" partNumber="0F6FXM_08P2T2_Ax">
>     <Versions>
>       <FW current="22.31.1014" available="N/A"/>
>       <PXE current="3.6.0403" available="N/A"/>
>       <UEFI current="14.24.0013" available="N/A"/>
>     </Versions>
> {code}
--
You are receiving this mail because:
You are the assignee for the bug.