DPDK usage discussions
* Failed to install QUEUE action using async API on ConnectX-6 NIC
@ 2024-06-12 14:44 Tao Li
  2024-06-12 16:08 ` Dariusz Sosnowski
  0 siblings, 1 reply; 3+ messages in thread
From: Tao Li @ 2024-06-12 14:44 UTC (permalink / raw)
  To: users; +Cc: tao.li06


Hi all,

I am using the async API to install flow rules that perform the QUEUE action, in order to capture packets matching a certain pattern for processing by a DPDK application. The ConnectX-6 NIC is configured in multiport e-switch mode, as outlined in the documentation (https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch). Currently, I am facing an issue where I cannot create the corresponding templates for this purpose. The commands to start test-pmd and to create the pattern and action templates are as follows:

<Command to start test-pmd>
sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
</Command to start test-pmd>

<Not working test-pmd commands>
port stop all
flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
port start all

flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end
flow actions_template 0 create ingress  actions_template_id 10  template queue / end mask queue index 0xffff / end
flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end actions queue index 0 / end
flow push 0 queue 0
</Not working test-pmd commands>
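
For context, the "flow configure" commands above correspond to the rte_flow_configure() call an application issues before creating any templates. Below is a minimal sketch of that call, assuming one flow queue of size 10 per port as in the testpmd arguments; the helper name configure_async_flow_engine() and the omitted error handling are my own illustrative choices, not code from our application.

<Sketch of the equivalent rte_flow_configure call>
#include <rte_flow.h>

/*
 * Illustrative sketch only: roughly what "flow configure ... queues_number 1
 * queues_size 10" maps to in the application. The port must be stopped
 * before calling rte_flow_configure() (testpmd's "port stop all").
 */
static int
configure_async_flow_engine(uint16_t port_id)
{
        struct rte_flow_error error;
        const struct rte_flow_port_attr port_attr = {
                .nb_counters = 0,       /* counters_number 0 */
                .nb_aging_objects = 0,  /* aging_counters_number 0 */
                .nb_meters = 0,         /* meters_number 0 */
        };
        const struct rte_flow_queue_attr queue_attr = { .size = 10 };
        const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };

        /* One asynchronous flow queue per port ("queues_number 1"). */
        return rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &error);
}
</Sketch of the equivalent rte_flow_configure call>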

The error encountered during the execution of the above test-pmd commands is:

<Encountered error>
mlx5_net: [mlx5dr_action_print_combo]: Invalid action_type sequence
mlx5_net: [mlx5dr_action_print_combo]: TIR
mlx5_net: [mlx5dr_matcher_check_and_process_at]: Invalid combination in action template
mlx5_net: [mlx5dr_matcher_bind_at]: Invalid at 0
</Encountered error>

Upon closer inspection of the driver code in DPDK 23.11 (and also in the latest DPDK main branch), it appears that the error occurs because MLX5DR_ACTION_TYP_TIR is not listed as a valid action in the MLX5DR_TABLE_TYPE_FDB entry of action_order_arr. If the following patch is applied, the error is resolved and the DPDK application is able to capture the matching packets:

<patch to apply>
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..c444ec761e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -85,6 +85,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
                BIT(MLX5DR_ACTION_TYP_VPORT) |
                BIT(MLX5DR_ACTION_TYP_DROP) |
                BIT(MLX5DR_ACTION_TYP_DEST_ROOT) |
+               BIT(MLX5DR_ACTION_TYP_TIR) |
                BIT(MLX5DR_ACTION_TYP_DEST_ARRAY),
                BIT(MLX5DR_ACTION_TYP_LAST),
        },
</patch to apply>

I would greatly appreciate it if anyone could provide insight into whether this behavior is intentional or if it is a bug in the driver. Many thanks in advance.
Best regards,
Tao


* RE: Failed to install QUEUE action using async API on ConnectX-6 NIC
  2024-06-12 14:44 Failed to install QUEUE action using async API on ConnectX-6 NIC Tao Li
@ 2024-06-12 16:08 ` Dariusz Sosnowski
  2024-06-14  9:30   ` Tao Li
  0 siblings, 1 reply; 3+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:08 UTC (permalink / raw)
  To: Tao Li, users; +Cc: tao.li06


Hi,

> From: Tao Li <byteocean@hotmail.com> 
> Sent: Wednesday, June 12, 2024 16:45
> To: users@dpdk.org
> Cc: tao.li06@sap.com
> Subject: Failed to install QUEUE action using async API on ConnectX-6 NIC
> 
> Hi all,
> 
> I am using the async API to install flow rules that perform the QUEUE action, in order to capture packets matching a certain pattern for processing by a DPDK application. The ConnectX-6 NIC is configured in multiport e-switch mode, as outlined in the documentation (https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch). Currently, I am facing an issue where I cannot create the corresponding templates for this purpose. The commands to start test-pmd and to create the pattern and action templates are as follows:
> 
> <Command to start test-pmd>
> sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
> </Command to start test-pmd>
> 
> <Not working test-pmd commands>
> port stop all
> flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> port start all
> 
> flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end
> flow actions_template 0 create ingress  actions_template_id 10  template queue / end mask queue index 0xffff / end
> flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
> flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end actions queue index 0 / end
> flow push 0 queue 0
> </Not working test-pmd commands>
> 
> The error encountered during the execution of the above test-pmd commands is:
> 
> <Encountered error>
> mlx5_net: [mlx5dr_action_print_combo]: Invalid action_type sequence
> mlx5_net: [mlx5dr_action_print_combo]: TIR
> mlx5_net: [mlx5dr_matcher_check_and_process_at]: Invalid combination in action template
> mlx5_net: [mlx5dr_matcher_bind_at]: Invalid at 0
> </Encountered error>
> 
> Upon closer inspection of the driver code in DPDK 23.11 (and also in the latest DPDK main branch), it appears that the error occurs because MLX5DR_ACTION_TYP_TIR is not listed as a valid action in the MLX5DR_TABLE_TYPE_FDB entry of action_order_arr. If the following patch is applied, the error is resolved and the DPDK application is able to capture the matching packets:
> 
> <patch to apply>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
> index 862ee3e332..c444ec761e 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_action.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_action.c
> @@ -85,6 +85,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
>                 BIT(MLX5DR_ACTION_TYP_VPORT) |
>                 BIT(MLX5DR_ACTION_TYP_DROP) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ROOT) |
> +               BIT(MLX5DR_ACTION_TYP_TIR) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ARRAY),
>                 BIT(MLX5DR_ACTION_TYP_LAST),
>         }, 
> </patch to apply>
> I would greatly appreciate it if anyone could provide insight into whether this behavior is intentional or if it is a bug in the driver. Many thanks in advance.

The fact that it works with this code change is not intended behavior; we do not support using QUEUE and RSS actions on transfer flow tables.

Also, there's another issue with table and actions template attributes:

- table is using transfer,
- actions template is using ingress.

Using them together is incorrect.
In the upcoming DPDK release, we are adding additional validations that will guard against that.

With your configuration, it is enough to create an ingress flow table on port 0
containing a flow rule that matches IPv6 traffic and forwards it to a queue on port 0.

By default, any traffic that is not explicitly dropped or forwarded in the E-Switch is handled by the ingress flow rules of the port on which the packet was received.
Since you are running with flow isolation enabled, this means that such traffic will go to the kernel interface unless you explicitly match it on ingress.
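
As a rough illustration of the above (a sketch only, with placeholder values for group, priority and table size), an ingress flow table is one where the pattern template, actions template and table attributes all carry the ingress direction, rather than mixing transfer and ingress:

<Illustrative attribute sketch>
#include <rte_flow.h>

/* All three attribute structures use ingress consistently,
 * instead of mixing a transfer table with an ingress actions template. */
static const struct rte_flow_pattern_template_attr pt_attr = {
        .relaxed_matching = 0,
        .ingress = 1,
};
static const struct rte_flow_actions_template_attr at_attr = {
        .ingress = 1,
};
static const struct rte_flow_template_table_attr tbl_attr = {
        .flow_attr = {
                .group = 0,
                .priority = 0,
                .ingress = 1,   /* not .transfer */
        },
        .nb_flows = 8,
};
</Illustrative attribute sketch>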

> 
> Best regards,
> Tao

Best regards,
Dariusz Sosnowski


* Re: Failed to install QUEUE action using async API on ConnectX-6 NIC
  2024-06-12 16:08 ` Dariusz Sosnowski
@ 2024-06-14  9:30   ` Tao Li
  0 siblings, 0 replies; 3+ messages in thread
From: Tao Li @ 2024-06-14  9:30 UTC (permalink / raw)
  To: Dariusz Sosnowski, users; +Cc: tao.li06


Hi Dariusz,

Thank you for your prompt reply and the hints you provided. Based on your suggestions, I am able to capture matched packets on one PF for the DPDK application by installing the following async rules.

<Command to install QUEUE action>
port stop all
flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
port start all

flow pattern_template 0 create ingress relaxed no pattern_template_id 10  template  eth type is 0x86dd  / end
flow actions_template 0 create ingress  actions_template_id 10  template queue / end mask queue index 0xffff / end
flow template_table 0 create group 0  priority 0  ingress  table_id 5 rules_number 8 pattern_template 10 actions_template 10
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern  eth type is 0x86dd  / end actions queue index 0 / end

flow push 0 queue 0
</Command to install QUEUE action>
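
For completeness, here is a minimal C sketch of the rte_flow calls that the working commands above correspond to in our application; port 0, flow queue 0, Rx queue 0 and the helper name install_ipv6_to_queue() are assumptions for illustration, and error handling is reduced to the bare minimum.

<Sketch of the equivalent rte_flow async calls>
#include <stdint.h>
#include <rte_flow.h>
#include <rte_ether.h>
#include <rte_byteorder.h>

/* Minimal sketch: ingress template table matching EtherType IPv6 and
 * steering matched packets to Rx queue 0 via the QUEUE action. */
static struct rte_flow *
install_ipv6_to_queue(uint16_t port_id)
{
        struct rte_flow_error error;
        const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
        const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
        const struct rte_flow_template_table_attr tbl_attr = {
                .flow_attr = { .ingress = 1 },
                .nb_flows = 8,                  /* "rules_number 8" */
        };
        /* "eth type is 0x86dd": match only the EtherType field. */
        const struct rte_flow_item_eth eth_spec = {
                .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV6),
        };
        const struct rte_flow_item_eth eth_mask = {
                .hdr.ether_type = RTE_BE16(0xffff),
        };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH,
                  .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* "queue index 0" with mask "queue index 0xffff". */
        const struct rte_flow_action_queue queue_conf = { .index = 0 };
        const struct rte_flow_action_queue queue_mask = { .index = UINT16_MAX };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        const struct rte_flow_action masks[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_mask },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_pattern_template *pt;
        struct rte_flow_actions_template *at;
        struct rte_flow_template_table *tbl;
        struct rte_flow *rule;

        pt = rte_flow_pattern_template_create(port_id, &pt_attr, pattern, &error);
        at = rte_flow_actions_template_create(port_id, &at_attr, actions, masks, &error);
        if (pt == NULL || at == NULL)
                return NULL;
        tbl = rte_flow_template_table_create(port_id, &tbl_attr, &pt, 1, &at, 1, &error);
        if (tbl == NULL)
                return NULL;

        /* Enqueue the rule on flow queue 0 and push it to the NIC;
         * the completion would normally be collected with rte_flow_pull(). */
        rule = rte_flow_async_create(port_id, 0, &op_attr, tbl,
                                     pattern, 0, actions, 0, NULL, &error);
        rte_flow_push(port_id, 0, &error);
        return rule;
}
</Sketch of the equivalent rte_flow async calls>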

In our use case, once the DPDK application has processed the captured packets, it may need to install additional flow rules that perform decap/encap/port actions to deliver packets from the PFs to the VFs. These dynamically installed flow rules might look like the ones shown below, which you may have seen in my previous emails.

<Command to install finer matching and port action rule>
flow pattern_template 0 create transfer relaxed no pattern_template_id 20  template represented_port ethdev_port_id is 0 / eth type is 0x86dd / ipv6 dst is ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff / end

set raw_decap 0 eth  / ipv6 / end_set
set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set

flow actions_template 0 create transfer  actions_template_id 20  template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 /  represented_port  / end

flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 6 rules_number 8 pattern_template 20 actions_template 20

flow queue 0 create 0 template_table 6 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd / ipv6 dst is abcd:efgh:1234:5678:0:1:0:1  / end actions raw_decap index 0 / raw_encap index 0 /  represented_port ethdev_port_id 3 / end
</Command to install finer matching and port action rule>


With the synchronous installation approach, we achieved our goal by installing these finer-granularity matching rules, similar to the above, into the same table as the QUEUE action rules. As you pointed out, QUEUE and RSS actions are not intended to be supported on transfer flow tables in async mode. Installing these decap/encap/port action rules into the same table is therefore not viable because of the table's ingress attribute, and jumping between ingress and transfer tables is also not an option, since they are not within the same e-switch domain.

To summarize the requirements: we need to capture a portion of the packets for the DPDK application while performing decap/encap/port actions on the remaining packets arriving on the same interface. Could you provide additional hints on how to address this use case? Thanks in advance.

Best regards,
Tao Li

From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wednesday, 12. June 2024 at 18:08
To: Tao Li <byteocean@hotmail.com>, users@dpdk.org <users@dpdk.org>
Cc: tao.li06@sap.com <tao.li06@sap.com>
Subject: RE: Failed to install QUEUE action using async API on ConnectX-6 NIC

Hi,

> From: Tao Li <byteocean@hotmail.com>
> Sent: Wednesday, June 12, 2024 16:45
> To: users@dpdk.org
> Cc: tao.li06@sap.com
> Subject: Failed to install QUEUE action using async API on ConnectX-6 NIC
>
> Hi all,
>
> I am using the async API to install flow rules that perform the QUEUE action, in order to capture packets matching a certain pattern for processing by a DPDK application. The ConnectX-6 NIC is configured in multiport e-switch mode, as outlined in the documentation (https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch). Currently, I am facing an issue where I cannot create the corresponding templates for this purpose. The commands to start test-pmd and to create the pattern and action templates are as follows:
>
> <Command to start test-pmd>
> sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
> </Command to start test-pmd>
>
> <Not working test-pmd commands>
> port stop all
> flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> port start all
>
> flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end
> flow actions_template 0 create ingress  actions_template_id 10  template queue / end mask queue index 0xffff / end
> flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
> flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd  / end actions queue index 0 / end
> flow push 0 queue 0
> </Not working test-pmd commands>
>
> The error encountered during the execution of the above test-pmd commands is:
>
> <Encountered error>
> mlx5_net: [mlx5dr_action_print_combo]: Invalid action_type sequence
> mlx5_net: [mlx5dr_action_print_combo]: TIR
> mlx5_net: [mlx5dr_matcher_check_and_process_at]: Invalid combination in action template
> mlx5_net: [mlx5dr_matcher_bind_at]: Invalid at 0
> </Encountered error>
>
> Upon closer inspection of the driver code in DPDK 23.11 (and also in the latest DPDK main branch), it appears that the error occurs because MLX5DR_ACTION_TYP_TIR is not listed as a valid action in the MLX5DR_TABLE_TYPE_FDB entry of action_order_arr. If the following patch is applied, the error is resolved and the DPDK application is able to capture the matching packets:
>
> <patch to apply>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
> index 862ee3e332..c444ec761e 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_action.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_action.c
> @@ -85,6 +85,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
>                 BIT(MLX5DR_ACTION_TYP_VPORT) |
>                 BIT(MLX5DR_ACTION_TYP_DROP) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ROOT) |
> +               BIT(MLX5DR_ACTION_TYP_TIR) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ARRAY),
>                 BIT(MLX5DR_ACTION_TYP_LAST),
>         },
> </patch to apply>
> I would greatly appreciate it if anyone could provide insight into whether this behavior is intentional or if it is a bug in the driver. Many thanks in advance.

The fact that it works with this code change is not intended behavior; we do not support using QUEUE and RSS actions on transfer flow tables.

Also, there's another issue with table and actions template attributes:

- table is using transfer,
- actions template is using ingress.

Using them together is incorrect.
In the upcoming DPDK release, we are adding additional validations that will guard against that.

With your configuration, it is enough to create an ingress flow table on port 0
containing a flow rule that matches IPv6 traffic and forwards it to a queue on port 0.

By default, any traffic that is not explicitly dropped or forwarded in the E-Switch is handled by the ingress flow rules of the port on which the packet was received.
Since you are running with flow isolation enabled, this means that such traffic will go to the kernel interface unless you explicitly match it on ingress.

>
> Best regards,
> Tao

Best regards,
Dariusz Sosnowski

