From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Adrian Schollmeyer <a.schollmeyer@syseleven.de>
Cc: <dev@dpdk.org>, Thomas Monjalon <thomas@monjalon.net>,
Michael Rossberg <michael.rossberg@tu-ilmenau.de>,
Michael Pfeiffer <m.pfeiffer@syseleven.de>
Subject: Re: [PATCH] net/mlx5: deprecate representor matching devarg
Date: Thu, 14 Aug 2025 15:31:01 +0200
Message-ID: <20250814133101.r4htfeid7tbt4lt2@ds-vm-debian.local>
In-Reply-To: <871879d3-b34c-4cea-9ae0-4715fb1c45fe@syseleven.de>
On Wed, Jul 23, 2025 at 11:07:20AM +0200, Adrian Schollmeyer wrote:
> Hi,
>
> On 16.07.25 11:38, Dariusz Sosnowski wrote:
>
> > Mark repr_matching_en device argument exposed by mlx5 PMD
> > as deprecated and schedule its removal in 25.11 release.
> >
> > [...]
> >
> > A new unified representor model, described in
> > https://fast.dpdk.org/events/slides/DPDK-2024-07-unified_representor.pdf
> > should be developed.
>
> The unified representor model seems to only address aggregation of traffic
> from all ports to a single representor (the e-switch manager port).
> In our use case with BlueField DPUs, however, traffic is always intercepted
> by the DPU and handled differently depending on whether the traffic came
> from one of the host representors (i.e. the host system or a VM) or one of
> the physical port representors (i.e. the network fabric).
> These two traffic groups are usually handled by disjoint sets of CPUs,
> each processing a disjoint set of DPDK ports.
> With repr_matching_en=0, we can flexibly steer traffic from many represented
> ports to different representors (e.g. dummy SF representors) to aggregate
> traffic by port group on the receive path.
> To do this, we create flow rules that tag packets received from the
> represented ports accordingly and match traffic by this tag in ingress flow
> rules for the aggregation representors. This is only possible with
> repr_matching_en=0, since only then traffic coming from arbitrary ports can
> be matched.
>
> Hence my question: Can such a flexible mapping still be achieved without
> repr_matching_en=0? Otherwise, removal of this devarg would break our use
> case.
Sorry for the delayed response.
You can replace the usage of repr_matching_en=0 with an RSS flow action
executed directly from transfer flow rules.
This is supported since FW version xx.43.1014 (LTS version, released last
October).
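If you need to confirm at runtime that a port is running a new enough
firmware, a minimal sketch (untested) using the generic ethdev call
rte_eth_dev_fw_version_get() could look like the one below; the helper
name is just for illustration and the reported string format is
driver-specific:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the firmware version reported by a port, e.g. to check that it
 * is recent enough for RSS executed from transfer rules. */
static void
print_fw_version(uint16_t port_id)
{
    char fw_version[64];
    int ret;

    ret = rte_eth_dev_fw_version_get(port_id, fw_version, sizeof(fw_version));
    if (ret == 0)
        printf("port %u firmware: %s\n", port_id, fw_version);
    else if (ret > 0)
        printf("port %u: firmware string truncated, %d bytes needed\n",
               port_id, ret);
    else
        printf("port %u: failed to query firmware version (%d)\n",
               port_id, ret);
}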
The overall flow would be similar to what you have currently,
but now all rules must be created in transfer template tables.
For example, the logic would look like this:
- Create a table in group 1, with the transfer attribute set and WIRE_ORIG
  specialization.
- Create 2 rules in that table:
  - If tag == A, then execute RSS on the queues dedicated to network
    fabric traffic.
  - If tag == B, then execute RSS on the queues dedicated to host traffic.
- Create a table in group 0, with the transfer attribute set.
- Create transfer rules in that table, matching on ports:
  - If the port is a physical port, tag the packet with A.
  - If the port is a host representor, tag the packet with B.
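As a rough illustration, the same group 1 setup expressed through the
rte_flow template API in C could look like the sketch below (not
compile-tested, error handling omitted, helper names made up; queue
numbers and mark IDs mirror the testpmd example in [1]):

#include <stdint.h>
#include <rte_common.h>
#include <rte_flow.h>

/* One actions template per traffic class: MARK + RSS, with all action
 * values fixed at template level (masks equal to the actions). */
static struct rte_flow_actions_template *
mark_rss_actions_template(uint16_t port_id, uint32_t mark_id,
                          const uint16_t *queues, uint32_t nb_queues,
                          struct rte_flow_error *err)
{
    struct rte_flow_action_mark mark = { .id = mark_id };
    struct rte_flow_action_rss rss = {
        .types = RTE_ETH_RSS_IPV4,
        .queue_num = nb_queues,
        .queue = queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_actions_template_attr attr = { .transfer = 1 };

    return rte_flow_actions_template_create(port_id, &attr,
                                            actions, actions, err);
}

/* Group 1 table: match TAG index 0 and fan out to per-class RSS. */
static struct rte_flow_template_table *
create_rss_by_tag_table(uint16_t port_id, struct rte_flow_error *err)
{
    /* Pattern template: match all 32 bits of TAG index 0. */
    struct rte_flow_item_tag tag_spec = { .index = 0 };
    struct rte_flow_item_tag tag_mask = {
        .data = UINT32_MAX,
        .index = UINT8_MAX,
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec, .mask = &tag_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_pattern_template_attr pt_attr = {
        .relaxed_matching = 1,
        .transfer = 1,
    };
    struct rte_flow_pattern_template *pt =
        rte_flow_pattern_template_create(port_id, &pt_attr, pattern, err);

    /* Actions template 0: network fabric class, template 1: host class. */
    static const uint16_t fabric_queues[] = { 0, 1 };
    static const uint16_t host_queues[] = { 2, 3 };
    struct rte_flow_actions_template *at[] = {
        mark_rss_actions_template(port_id, 0x1111, fabric_queues,
                                  RTE_DIM(fabric_queues), err),
        mark_rss_actions_template(port_id, 0x2222, host_queues,
                                  RTE_DIM(host_queues), err),
    };

    /* Group 1, transfer, specialized for wire-originated traffic. */
    struct rte_flow_template_table_attr tbl_attr = {
        .flow_attr = { .group = 1, .transfer = 1 },
        .nb_flows = 2,
        .specialize = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG,
    };

    return rte_flow_template_table_create(port_id, &tbl_attr, &pt, 1,
                                          at, RTE_DIM(at), err);
}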
I included example testpmd commands below [1] that you can use to try out
this approach.
Please also note that, when testing, you will have to apply the following
bug-fix patches:
- https://patches.dpdk.org/project/dpdk/patch/20250814132002.799724-1-dsosnowski@nvidia.com/
- https://patches.dpdk.org/project/dpdk/patch/20250814132411.799853-1-dsosnowski@nvidia.com/
Please let us know if this alternative solution would be suitable
for your use case or if you have any questions.
Best regards,
Dariusz Sosnowski
[1] testpmd example of RSS in transfer rules:
command line:
sudo ./build/app/dpdk-testpmd -a 08:00.1,dv_flow_en=2,representor=pf0vf0-1 -- --rxq=4 --txq=4 -i
commands:
# Configure flow engine on all ports
port stop 2
port stop 1
port stop 0
flow configure 0 queues_number 4 queues_size 64
flow configure 1 queues_number 4 queues_size 64
flow configure 2 queues_number 4 queues_size 64
port start 0
port start 1
port start 2
# Create a WIRE_ORIG transfer table doing RSS
# If tag == 0x11111111 -> mark 0x1111, send to queues 0 and 1
# If tag == 0x22222222 -> mark 0x2222, send to queues 2 and 3
flow pattern_template 0 create transfer relaxed yes pattern_template_id 20000 template tag data mask 0xffffffff index is 0 / end
flow actions_template 0 create transfer actions_template_id 20000 template mark / rss queues 0 1 end types ipv4 end / end mask mark / rss queues 0 1 end types ipv4 end / end
flow actions_template 0 create transfer actions_template_id 20001 template mark / rss queues 2 3 end types ipv4 end / end mask mark / rss queues 2 3 end types ipv4 end / end
flow template_table 0 create group 1 priority 0 transfer table_id 20000 rules_number 2 wire_orig pattern_template 20000 actions_template 20000 actions_template 20001
flow queue 0 create 0 template_table 20000 pattern_template 0 actions_template 0 postpone yes pattern tag data spec 0x11111111 / end actions mark id 0x1111 / rss / end
flow queue 0 create 0 template_table 20000 pattern_template 0 actions_template 1 postpone yes pattern tag data spec 0x22222222 / end actions mark id 0x2222 / rss / end
flow push 0 queue 0
flow pull 0 queue 0
# Create a transfer table for direction classification
flow pattern_template 0 create transfer relaxed yes pattern_template_id 10000 template represented_port ethdev_port_id mask 0xffff / end
flow actions_template 0 create transfer actions_template_id 10000
template modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 00000000 width 32 / jump group 1 / end
mask modify_field op set dst_type tag dst_offset 0xffffffff dst_level 0xff dst_tag_index 0xff src_type value src_value 00000000 width 0xffffffff / jump group 0xffffffff / end
flow template_table 0 create group 0 priority 0 transfer table_id 10000 rules_number 64 pattern_template 10000 actions_template 10000
# If packet came from wire, tag with 0x11111111
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 0 / end
actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 11111111 width 32 / jump group 1 / end
# If packet came from VF0 or VF1, tag with 0x22222222
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 1 / end
actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 22222222 width 32 / jump group 1 / end
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 2 / end
actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 22222222 width 32 / jump group 1 / end
flow push 0 queue 0
flow pull 0 queue 0
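For reference, a rough C sketch (not compile-tested, error handling
omitted) of the queued rule insertion used above, i.e. enqueue with
postpone set, then push the queue and poll for completions, assuming
"table" is the group 1 table from the earlier sketch:

#include <rte_common.h>
#include <rte_flow.h>

static void
insert_fabric_rss_rule(uint16_t port_id, uint32_t queue_id,
                       struct rte_flow_template_table *table)
{
    struct rte_flow_op_attr op_attr = { .postpone = 1 };
    struct rte_flow_op_result results[1];
    struct rte_flow_error err;

    /* Pattern: TAG index 0 equals 0x11111111 (wire-originated class).
     * The mask comes from the pattern template. */
    struct rte_flow_item_tag tag_spec = { .data = 0x11111111, .index = 0 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    /* Actions mirror actions template 0 (mark 0x1111 + RSS to queues 0-1);
     * the values are already fixed in that template. */
    static const uint16_t fabric_queues[] = { 0, 1 };
    struct rte_flow_action_mark mark = { .id = 0x1111 };
    struct rte_flow_action_rss rss = {
        .types = RTE_ETH_RSS_IPV4,
        .queue_num = RTE_DIM(fabric_queues),
        .queue = fabric_queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Pattern template 0, actions template 0 of "table". The host rule
     * (tag 0x22222222, actions template 1) is enqueued the same way. */
    rte_flow_async_create(port_id, queue_id, &op_attr, table,
                          pattern, 0, actions, 0, NULL, &err);

    /* Flush the queued operation to hardware and poll its result. */
    rte_flow_push(port_id, queue_id, &err);
    rte_flow_pull(port_id, queue_id, results, RTE_DIM(results), &err);
}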