DPDK patches and discussions
* [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance on mixed traffic (  rxq_cqe_comp_en=4 )
@ 2022-07-15 16:06 bugzilla
  2022-07-17  5:36 ` Asaf Penso
  0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2022-07-15 16:06 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=1053

            Bug ID: 1053
           Summary: ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance
                    on mixed traffic (  rxq_cqe_comp_en=4 )
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: rtox@gmx.net
  Target Milestone: ---

Our team has been chasing major performance issues with ConnectX6 cards.

*Customer challenge:*
Flow-stable (symmetric RSS) load-balancing of flows across 8 worker lcores.

*Observation:*
Performance is fine up to 100 Gbps using either TCP-only *or* UDP-only traffic
profiles.
Mixed traffic suffers up to 50% loss, with all dropped packets showing up in
xstats as rx_phy_discard_packets.

Card info is at the end of this email.


There appears to be a severe performance issue with mixed UDP/TCP traffic when
using symmetric load-balancing across multiple workers.
E.g., building testpmd from DPDK v20.11 or newer and running:


> sudo ./dpdk-testpmd -n 8 -l 4,6,8,10,12,14,16,18,20 \
>   -a 0000:4b:00.0,rxq_cqe_comp_en=4 -a 0000:4b:00.1,rxq_cqe_comp_en=4 \
>   -- --forward-mode=mac --rxq=8 --txq=8 --nb-cores=8 --numa -i -a --disable-rss


and configuring:


> flow create 0 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end
>   queues 0 1 2 3 4 5 6 7 end
>   key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A
>   / end

> flow create 0 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end
>   queues 0 1 2 3 4 5 6 7 end
>   key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A
>   / end

> flow create 1 ingress pattern eth / ipv4 / tcp / end actions rss types ipv4-tcp end
>   queues 0 1 2 3 4 5 6 7 end
>   key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A
>   / end

> flow create 1 ingress pattern eth / ipv4 / udp / end actions rss types ipv4-udp end
>   queues 0 1 2 3 4 5 6 7 end
>   key 6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A6D5A
>   / end

will show *significant* packet drops at loads above 50 Gbps with any kind of mixed
UDP/TCP traffic, e.g. the TRex SFR profile:

https://github.com/cisco-system-traffic-generator/trex-core/blob/master/scripts/cap2/sfr3.yaml

Whenever those packet drops occur, they show up in xstats as
"rx_phy_discard_packets".


On the other hand, a TCP-only or UDP-only traffic profile scales perfectly up
to 100 Gbps without drops.
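
(A minimal, illustrative sketch of how a counter such as rx_phy_discard_packets
can be read programmatically via the generic xstats-by-id API; the helper name
is made up for this example and is not part of DPDK or our application.)

{code}
/* Minimal illustrative sketch (helper name is made up, not part of DPDK):
 * read a single named extended statistic, e.g. rx_phy_discard_packets,
 * through the generic xstats-by-id API. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_xstat(uint16_t port_id, const char *name)
{
	uint64_t id, value;

	/* Map the counter name to its per-port xstats id. */
	if (rte_eth_xstats_get_id_by_name(port_id, name, &id) != 0) {
		printf("port %u: xstat \"%s\" not found\n", port_id, name);
		return;
	}
	/* Fetch just that one counter value. */
	if (rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
		printf("port %u %s: %" PRIu64 "\n", port_id, name, value);
}

/* Usage: print_xstat(0, "rx_phy_discard_packets"); */
{code}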


Thanks for your help!



> {code}
> ConnectX6DX
>
> <Devices>
>     <Device pciName="0000:4b:00.0" type="ConnectX6DX" psid="DEL0000000027"
>             partNumber="0F6FXM_08P2T2_Ax">
>       <Versions>
>         <FW current="22.31.1014" available="N/A"/>
>         <PXE current="3.6.0403" available="N/A"/>
>         <UEFI current="14.24.0013" available="N/A"/>
>      </Versions>
> {code}

-- 
You are receiving this mail because:
You are the assignee for the bug.


* RE: [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance on mixed traffic (  rxq_cqe_comp_en=4 )
  2022-07-15 16:06 [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance on mixed traffic ( rxq_cqe_comp_en=4 ) bugzilla
@ 2022-07-17  5:36 ` Asaf Penso
  0 siblings, 0 replies; 2+ messages in thread
From: Asaf Penso @ 2022-07-17  5:36 UTC (permalink / raw)
  To: bugzilla, dev

Hello,
Can you please share the output of xstats?

Regards,
Asaf Penso

>-----Original Message-----
>From: bugzilla@dpdk.org <bugzilla@dpdk.org>
>Sent: Friday, July 15, 2022 7:07 PM
>To: dev@dpdk.org
>Subject: [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance
>on mixed traffic ( rxq_cqe_comp_en=4 )
>
>https://bugs.dpdk.org/show_bug.cgi?id=1053


end of thread

Thread overview: 2+ messages
2022-07-15 16:06 [Bug 1053] ConnectX6 / mlx5 DPDK - bad RSS/ rte_flow performance on mixed traffic ( rxq_cqe_comp_en=4 ) bugzilla
2022-07-17  5:36 ` Asaf Penso
