DPDK usage discussions
* Performance of CX7 with 3 flow groups versus 2
@ 2024-06-28  9:53 Tony Hart
  2024-06-29 18:42 ` Tony Hart
  0 siblings, 1 reply; 2+ messages in thread
From: Tony Hart @ 2024-06-28  9:53 UTC (permalink / raw)
  To: users


I'm seeing an unexpected performance drop on the CX7 when comparing three
configurations: 3 groups with a policer, 3 groups without a policer, and
2 groups without a policer.  The throughput of each configuration is
72 Gbps, 104 Gbps, and 124 Gbps respectively, so the first configuration
drops to almost half the performance of the third even though all three
configurations are just hairpinning packets (the policer is used only to
color the packets; no fate actions are taken as a result).

This is on a 400G link and using SWS mode.  I know there was a similar
issue reported on this mailing list recently related to SWS versus HWS
performance, but this issue seems different.

Any thoughts welcome.

thanks
tony

These are the testpmd commands used to recreate the issue...


*Common commands:*
add port meter profile trtcm_rfc4115 0 1 1000 150000000 1000 1000 1

add port meter policy 0 1 g_actions end y_actions end r_actions drop / end

create port meter 0 1 1 1 yes 0xffff 0 g 0
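As background for the policy above: trtcm_rfc4115 marks each packet green,
yellow, or red against two token buckets, and only red packets hit the drop
fate here.  A minimal color-blind sketch of the RFC 4115 marker (illustrative
names, not the DPDK API; rates and bursts share the packet-length unit):

```python
# Minimal color-blind sketch of the RFC 4115 two-rate, three-color marker
# that the trtcm_rfc4115 profile implements in the NIC.  Illustrative only.
class Trtcm4115:
    def __init__(self, cir: float, eir: float, cbs: float, ebs: float):
        self.cir, self.eir = cir, eir   # committed / excess information rates
        self.cbs, self.ebs = cbs, ebs   # committed / excess burst sizes
        self.tc, self.te = cbs, ebs     # token buckets start full
        self.last = 0.0

    def color(self, now: float, length: float) -> str:
        # Refill both buckets for the elapsed time, capped at the burst sizes.
        dt, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.ebs, self.te + self.eir * dt)
        if length <= self.tc:           # conforms to the committed rate
            self.tc -= length
            return "green"
        if length <= self.te:           # conforms to the excess rate
            self.te -= length
            return "yellow"
        return "red"                    # exceeds both; r_actions applies
```

With g_actions and y_actions empty, green and yellow packets are colored but
otherwise untouched, which is why the meter is expected to be fate-neutral at
these traffic rates.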



*3 groups with policer*
flow create 0 ingress group 0 pattern end actions jump group 1 / end

flow create 0 ingress group 1 pattern end actions meter mtr_id 1 / jump group 2 / end

flow create 0 ingress group 2 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end



*3 groups without policer*
flow create 0 ingress group 0 pattern end actions jump group 1 / end

flow create 0 ingress group 1 pattern end actions jump group 2 / end

flow create 0 ingress group 2 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end



*2 groups without policer*
flow create 0 ingress group 0 pattern end actions jump group 1 / end

flow create 0 ingress group 1 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end

thanks,
tony

*testpmd command line*
/dpdk-testpmd -l8-14 -a81:00.0,dv_flow_en=1 -- -i --nb-cores 6 --rxq 6 --txq 6 --port-topology loop --forward-mode=rxonly --hairpinq 4 --hairpin-mode 0x10
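For anyone reproducing this, a commented breakdown of that command line (the
flag meanings are my reading of the testpmd and mlx5 docs, so treat them as
assumptions rather than authoritative):

```shell
/dpdk-testpmd -l8-14 \              # run EAL threads on cores 8-14
  -a81:00.0,dv_flow_en=1 \          # allow only this NIC; on mlx5, dv_flow_en=1
                                    # selects the DV (SWS) flow engine, while
                                    # dv_flow_en=2 would select HWS
  -- -i \                           # interactive testpmd CLI
  --nb-cores 6 --rxq 6 --txq 6 \    # 6 forwarding cores, 6 Rx/Tx queue pairs
  --port-topology loop \            # single-port loop topology
  --forward-mode=rxonly \           # software path only receives; traffic is
                                    # hairpinned in hardware
  --hairpinq 4 \                    # 4 hairpin queues (queue indices 6-9,
                                    # matching the rss queues in the rules)
  --hairpin-mode 0x10               # explicit Tx flow rule mode
```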


*Versions*
mlnx-ofa_kernel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64
kmod-mlnx-ofa_kernel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64
mlnx-ofa_kernel-devel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64
ofed-scripts-24.04-OFED.24.04.0.6.6.x86_64

DPDK: v24.03



* Re: Performance of CX7 with 3 flow groups versus 2
  2024-06-28  9:53 Performance of CX7 with 3 flow groups versus 2 Tony Hart
@ 2024-06-29 18:42 ` Tony Hart
  0 siblings, 0 replies; 2+ messages in thread
From: Tony Hart @ 2024-06-29 18:42 UTC (permalink / raw)
  To: users


I didn't mention that the packet size used for the tests was 68 bytes.

Also, note there is a typo in the profile settings: the PIR rate actually
used was 1200000000 (not 150000000).  However, this does not seem to make
any difference to the results.
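For context, a quick back-of-envelope conversion of the three throughput
figures into packet rates at this frame size (assuming the quoted Gbps are
wire rates, i.e. including the 20 bytes of preamble + inter-frame gap per
frame):

```python
def wire_pps(gbps: float, frame_bytes: int = 68, overhead_bytes: int = 20) -> float:
    """Packets/sec implied by a wire-level throughput at a fixed frame size."""
    bits_per_frame = (frame_bytes + overhead_bytes) * 8  # 704 bits for 68B frames
    return gbps * 1e9 / bits_per_frame

for gbps in (72, 104, 124):
    print(f"{gbps} Gbps -> {wire_pps(gbps) / 1e6:.1f} Mpps")
# 72 Gbps -> 102.3 Mpps
# 104 Gbps -> 147.7 Mpps
# 124 Gbps -> 176.1 Mpps
```

So the with-policer case tops out around 102 Mpps, which gives a
packets-per-second figure to compare against any known meter throughput
limits.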


-- 
tony


