From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS / rte_flow performance on mixed traffic (rxq_cqe_comp_en=4)
Date: Fri, 15 Jul 2022 16:06:58 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1053

            Bug ID: 1053
           Summary: ConnectX6 / mlx5 DPDK - bad RSS / rte_flow performance
                    on mixed traffic (rxq_cqe_comp_en=4)
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: rtox@gmx.net
  Target Milestone: ---

Our team has been chasing major performance issues with ConnectX6 cards.

*Customer challenge:* Flow-stable (symmetric RSS) load-balancing of flows to
8 worker lcores.

*Observation:* Performance is fine up to 100 Gbps using either TCP-only *or*
UDP-only traffic profiles. With mixed traffic we see up to 50% packet loss,
and all dropped packets show up in xstats as rx_phy_discard_packets.
(Card info at the end of this email.)

There appears to be a huge performance issue with mixed UDP/TCP traffic when
using symmetric load-balancing across multiple workers. E.g.
compiling a DPDK v20.11 (or newer) testpmd app:

> sudo ./dpdk-testpmd -n 8 -l 4,6,8,10,12,14,16,18,20 -a 0000:4b:00.0,rxq_cqe_comp_en=4 -a 0000:4b:00.1,rxq_cqe_comp_en=4 -- --forward-mode=mac --rxq=8 --txq=8 --nb-cores=8 --numa -i -a --disable-rss

and configuring:

> flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 0 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 1 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end
> flow create 1 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end queues 0 1 2 3 4 5 6 7 end key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A / end

will see *significant* packet drops at loads above 50 Gbps with any kind of
mixed UDP/TCP traffic, e.g.
https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/cap2/sfr3.yaml

Whenever those packet drops occur, they show up in the xstats as
"rx_phy_discard_packets".

On the other hand, a TCP-only or UDP-only traffic profile scales perfectly up
to 100 Gbps without drops.

Thanks for your help!

> {code}
> ConnectX6DX
> partNumber="0F6FXM_08P2T2_Ax"
> {code}
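For anyone reproducing this from an application rather than from the testpmd
console, below is a minimal, hypothetical sketch (not taken from the reporter's
code) of how one of the four "flow create" rules above could be programmed
through the rte_flow C API. It assumes the port is already configured and
started with 8 Rx queues; the queue list and the repeated 0x6D5A symmetric key
mirror the testpmd commands, and error handling is trimmed for brevity.

{code}
/* Hypothetical sketch: equivalent of
 * "flow create <port> ingress pattern eth / ipv4 / tcp / end
 *  actions rss types ipv4-tcp end queues 0..7 end key 6D5A... / end" */
#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
create_sym_tcp_rss_flow(uint16_t port_id, struct rte_flow_error *error)
{
	/* 40-byte symmetric RSS key: 0x6D5A repeated, as in the commands above. */
	static uint8_t sym_key[40];
	static const uint16_t queues[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
	unsigned int i;

	for (i = 0; i < sizeof(sym_key); i += 2) {
		sym_key[i] = 0x6D;
		sym_key[i + 1] = 0x5A;
	}

	/* "ingress pattern eth / ipv4 / tcp / end" */
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* "actions rss types ipv4-tcp end queues 0 1 2 3 4 5 6 7 end key <sym_key>" */
	const struct rte_flow_action_rss rss = {
		.types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		.key_len = sizeof(sym_key),
		.key = sym_key,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}
{code}

The UDP rule differs only in the last pattern item (RTE_FLOW_ITEM_TYPE_UDP) and
the RSS type (RTE_ETH_RSS_NONFRAG_IPV4_UDP); calling both variants once per
port (0 and 1) corresponds to the four testpmd lines quoted above.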