* Re: [PATCH] net/mlx5: deprecate representor matching devarg
From: Thomas Monjalon @ 2025-07-23 9:30 UTC (permalink / raw)
To: Adrian Schollmeyer
Cc: Dariusz Sosnowski, dev, Michael Rossberg, Michael Pfeiffer
23/07/2025 11:07, Adrian Schollmeyer:
> Hi,
>
> On 16.07.25 11:38, Dariusz Sosnowski wrote:
>
> > Mark repr_matching_en device argument exposed by mlx5 PMD
> > as deprecated and schedule its removal in 25.11 release.
> >
> > [...]
> >
> > A new unified representor model, described in
> > https://fast.dpdk.org/events/slides/DPDK-2024-07-unified_representor.pdf,
> > should be developed.
>
> The unified representor model seems to only address aggregation of
> traffic of all ports to a single representor (the e-switch manager port).
> In our use case with BlueField DPUs, however, traffic is always
> intercepted by the DPU and handled differently depending on whether the
> traffic came from one of the host representors (i.e. the host system or
> a VM) or one of the physical port representors (i.e. the network
> fabric).
> These two traffic groups are usually handled by disjoint sets of CPUs,
> each processing a disjoint set of DPDK ports.
> With repr_matching_en=0, we can flexibly steer traffic from many
> represented ports to different representors (e.g. dummy SF representors)
> to aggregate traffic by port group on the receive path.
> To do this, we create flow rules that tag packets received from the
> represented ports accordingly and match traffic by this tag in ingress
> flow rules for the aggregation representors. This is only possible with
> repr_matching_en=0, since only then traffic coming from arbitrary ports
> can be matched.
Thanks a lot for your detailed feedback.
> Hence my question: Can such a flexible mapping still be achieved without
> repr_matching_en=0? Otherwise, removal of this devarg would break our
> use case.
I invite you to take part in the discussions on the follow-up patches to come.
I understand we must find a solution to cover your use case.
I hope it can be part of an ethdev API, instead of mlx5 specific behavior.
* Re: [PATCH] net/mlx5: deprecate representor matching devarg
From: Dariusz Sosnowski @ 2025-08-14 13:31 UTC (permalink / raw)
To: Adrian Schollmeyer
Cc: dev, Thomas Monjalon, Michael Rossberg, Michael Pfeiffer
On Wed, Jul 23, 2025 at 11:07:20AM +0200, Adrian Schollmeyer wrote:
> [...]
>
> Hence my question: Can such a flexible mapping still be achieved without
> repr_matching_en=0? Otherwise, removal of this devarg would break our use
> case.
Sorry for the delayed response.
You can replace the usage of repr_matching_en=0 with the RSS flow action
executed directly from transfer flow rules.
This has been allowed since FW version xx.43.1014 (an LTS version, released
last October).
The overall flow would be similar to what you have currently,
but now all rules must be created in transfer template tables.
For example, the logic would look like this:
- Create a table with the transfer attribute set,
  with WIRE_ORIG specialization, in group 1.
- Create 2 rules in that table:
  - If tag == A, then execute RSS on queues dedicated to network fabric traffic.
  - If tag == B, then execute RSS on queues dedicated to host traffic.
- Create a table in group 0 with the transfer attribute set:
  - Create transfer rules matching on the incoming port:
    - If the port is a physical port, tag the packet with A.
    - If the port is a host representor, tag the packet with B.
I attached example testpmd commands below [1] so you can try out this approach.
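If it helps, here is a rough, untested sketch of the same group 1 table
built directly with the rte_flow template API instead of testpmd.
create_group1_table(), the tag values and the queue lists are illustrative
only, and error handling is omitted; a second actions template would cover
the host queues (2 and 3) in the same way:

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow_template_table *
create_group1_table(uint16_t proxy_port_id)
{
	struct rte_flow_error error;

	/* Pattern template: match all 32 bits of tag register 0. */
	const struct rte_flow_item_tag tag_spec = { .index = 0 };
	const struct rte_flow_item_tag tag_mask = {
		.data = UINT32_MAX, .index = UINT8_MAX,
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_TAG,
		  .spec = &tag_spec, .mask = &tag_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_pattern_template_attr pattern_attr = {
		.relaxed_matching = 1,
		.transfer = 1,
	};
	struct rte_flow_pattern_template *pt = rte_flow_pattern_template_create(
			proxy_port_id, &pattern_attr, pattern, &error);

	/* Actions template: the mark id is per rule, RSS queues are fixed. */
	const uint16_t fabric_queues[] = { 0, 1 };
	const struct rte_flow_action_rss rss = {
		.types = RTE_ETH_RSS_IPV4,
		.queue_num = 2,
		.queue = fabric_queues,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK },
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK }, /* zero mask: id per rule */
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_actions_template_attr actions_attr = {
		.transfer = 1,
	};
	struct rte_flow_actions_template *at = rte_flow_actions_template_create(
			proxy_port_id, &actions_attr, actions, masks, &error);

	/* Group 1 transfer table, specialized for wire-originated traffic. */
	const struct rte_flow_template_table_attr table_attr = {
		.flow_attr = { .group = 1, .transfer = 1 },
		.nb_flows = 2,
		.specialize = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG,
	};
	return rte_flow_template_table_create(proxy_port_id, &table_attr,
					      &pt, 1, &at, 1, &error);
}

Rules would then be enqueued against this table with rte_flow_async_create()
and flushed with rte_flow_push()/rte_flow_pull(), mirroring the
flow queue/push/pull commands in [1].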
Please also note that when testing you would have to apply the following
patches (these are bug fixes):
- https://patches.dpdk.org/project/dpdk/patch/20250814132002.799724-1-dsosnowski@nvidia.com/
- https://patches.dpdk.org/project/dpdk/patch/20250814132411.799853-1-dsosnowski@nvidia.com/
Please let us know if this alternative solution would be suitable
for your use case or if you have any questions.
Best regards,
Dariusz Sosnowski
[1] testpmd example of RSS in transfer rules:
command line:
sudo ./build/app/dpdk-testpmd -a 08:00.1,dv_flow_en=2,representor=pf0vf0-1 -- --rxq=4 --txq=4 -i
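(Here dv_flow_en=2 selects the mlx5 HW Steering flow engine required for the
template API, and representor=pf0vf0-1 probes the VF0 and VF1 representors on
PF0, so port 0 is the physical port and ports 1-2 are the host-side
representors.)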
commands:
# Configure flow engine on all ports
port stop 2
port stop 1
port stop 0
flow configure 0 queues_number 4 queues_size 64
flow configure 1 queues_number 4 queues_size 64
flow configure 2 queues_number 4 queues_size 64
port start 0
port start 1
port start 2
# Create a WIRE_ORIG transfer table doing RSS
# If tag == 0x11111111 -> mark 0x1111, send to queues 0 and 1
# If tag == 0x22222222 -> mark 0x2222, send to queues 2 and 3
flow pattern_template 0 create transfer relaxed yes pattern_template_id 20000 template tag data mask 0xffffffff index is 0 / end
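# relaxed yes: only the explicitly masked items (the tag) are matched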
flow actions_template 0 create transfer actions_template_id 20000 template mark / rss queues 0 1 end types ipv4 end / end mask mark / rss queues 0 1 end types ipv4 end / end
flow actions_template 0 create transfer actions_template_id 20001 template mark / rss queues 2 3 end types ipv4 end / end mask mark / rss queues 2 3 end types ipv4 end / end
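# In the actions templates, the masked fields (the RSS queue lists) are fixed
# at template creation time; the mark id is left to be provided per rule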
flow template_table 0 create group 1 priority 0 transfer table_id 20000 rules_number 2 wire_orig pattern_template 20000 actions_template 20000 actions_template 20001
flow queue 0 create 0 template_table 20000 pattern_template 0 actions_template 0 postpone yes pattern tag data spec 0x11111111 / end actions mark id 0x1111 / rss / end
flow queue 0 create 0 template_table 20000 pattern_template 0 actions_template 1 postpone yes pattern tag data spec 0x22222222 / end actions mark id 0x2222 / rss / end
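# Push the postponed rule creations to hardware and poll for their completions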
flow push 0 queue 0
flow pull 0 queue 0
# Create a transfer table for direction classification
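# Each rule writes an immediate 32-bit value into tag register 0 via
# modify_field and jumps to group 1; the per-rule value carries the
# port-group tag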
flow pattern_template 0 create transfer relaxed yes pattern_template_id 10000 template represented_port ethdev_port_id mask 0xffff / end
flow actions_template 0 create transfer actions_template_id 10000 template modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 00000000 width 32 / jump group 1 / end mask modify_field op set dst_type tag dst_offset 0xffffffff dst_level 0xff dst_tag_index 0xff src_type value src_value 00000000 width 0xffffffff / jump group 0xffffffff / end
flow template_table 0 create group 0 priority 0 transfer table_id 10000 rules_number 64 pattern_template 10000 actions_template 10000
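# Group 0 is the root table, so transfer traffic hits these rules first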
# If packet came from wire, tag with 0x11111111
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 0 / end actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 11111111 width 32 / jump group 1 / end
# If packet came from VF0 or VF1, tag with 0x22222222
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 1 / end actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 22222222 width 32 / jump group 1 / end
flow queue 0 create 0 template_table 10000 pattern_template 0 actions_template 0 postpone yes pattern represented_port ethdev_port_id spec 2 / end actions modify_field op set dst_type tag dst_tag_index 0 src_type value src_value 22222222 width 32 / jump group 1 / end
flow push 0 queue 0
flow pull 0 queue 0