DPDK usage discussions
* Flow rules performance with ConnectX-6 Dx
@ 2022-03-11 11:10 Дмитрий Степанов
  2022-03-11 11:37 ` Dmitry Kozlyuk
  0 siblings, 1 reply; 5+ messages in thread
From: Дмитрий Степанов @ 2022-03-11 11:10 UTC (permalink / raw)
  To: users


Hi, folks!

I'm using a Mellanox ConnectX-6 Dx EN adapter card (100GbE; dual-port QSFP56;
PCIe 4.0/3.0 x16) with DPDK 21.11 on Ubuntu 20.04.

I want to drop particular packets in NIC hardware using rte_flow.
The flow configuration is rather straightforward: RTE_FLOW_ACTION_TYPE_DROP
as the single action and RTE_FLOW_ITEM_TYPE_ETH/RTE_FLOW_ITEM_TYPE_IPV4 as
the pattern items (I used the flow_filtering DPDK example as a starting point).

I'm using the following IPv4 pattern for the rte_flow drop rule: 0.0.0.0/0 as
the source IP and 10.0.0.2/32 as the destination IP. So I want to drop all
packets addressed to 10.0.0.2 (the source IP doesn't matter).
To test this, I generate TCP packets with two different destination IPs,
10.0.0.1 and 10.0.0.2. Source IPs are generated randomly in the range
10.0.0.0-10.255.255.255. Half of the traffic should be dropped and the other
half passed to my application.
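For reference, the rule I create looks roughly like this. This is a minimal sketch based on the flow_filtering example; the port_id parameter and error handling are simplified, and the helper name create_drop_rule is just for illustration:

```c
#include <stdint.h>
#include <rte_ip.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Sketch: drop, in hardware, every ingress packet whose IPv4 destination
 * is 10.0.0.2/32, regardless of source address. */
static struct rte_flow *
create_drop_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(10, 0, 0, 2)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX), /* /32 on dst */
		/* src_addr mask left at 0 => 0.0.0.0/0, source doesn't matter */
	};

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	if (rte_flow_validate(port_id, &attr, pattern, actions, error) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}
```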

If I generate 20 Mpps in total, I see that 10 Mpps is dropped by rte_flow and
10 Mpps is passed to my application, so everything is fine there.
But if I increase the input traffic to 40/50/100/148 Mpps, at most about
15 Mpps is passed to my application, regardless of the input rate; the rest
is dropped. I verified that my generator produces packets correctly:
destination IPs are equally distributed across the generated traffic. If I
generate packets that don't match the rte_flow drop rule (e.g. with
destination IPs 10.0.0.1 and 10.0.0.3), all traffic is passed to my
application without problems.

Another example: if I generate traffic with three different destination IPs
(10.0.0.1, 10.0.0.2, 10.0.0.3) at 60 Mpps (20 Mpps per destination, where
10.0.0.2 matches the rte_flow drop rule), only 30 Mpps in total is passed to
my application (15 Mpps for each non-matched destination instead of 20 Mpps).
If I replace 10.0.0.2 (which matches the drop rule) with 10.0.0.4, all
60 Mpps is passed to my application.

To summarize: if the generated traffic includes a destination IP that matches
the rte_flow drop rule, each non-matched destination IP is limited to about
15 Mpps. If the traffic includes no destination IP matching the drop rule,
this 15 Mpps limit does not apply and all traffic is passed to my
application.

Is there any explanation for this behavior, or am I doing something wrong? I
haven't found an explanation in the mlx5 PMD documentation.



Thanks, Dmitriy Stepanov



Thread overview: 5+ messages
2022-03-11 11:10 Flow rules performance with ConnectX-6 Dx Дмитрий Степанов
2022-03-11 11:37 ` Dmitry Kozlyuk
2022-03-11 12:58   ` Дмитрий Степанов
2022-03-18 14:54     ` Dmitry Kozlyuk
2022-03-21  8:40       ` Дмитрий Степанов
