From: Tao Li <byteocean@hotmail.com>
To: Dariusz Sosnowski <dsosnowski@nvidia.com>, users@dpdk.org
Cc: tao.li06@sap.com
Subject: Re: Failed to install QUEUE action using async API on ConnectX-6 NIC
Date: Fri, 14 Jun 2024 09:30:59 +0000

Hi Dariusz,

Thanks for your speedy reply and the hints you provided. I am able to capture matched packets on one PF for the DPDK application by installing the following async rules based on your suggestions.

<Command to install QUEUE action>

port stop all
flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
port start all

flow pattern_template 0 create ingress relaxed no pattern_template_id 10 template eth type is 0x86dd / end
flow actions_template 0 create ingress actions_template_id 10 template queue / end mask queue index 0xffff / end
flow template_table 0 create group 0 priority 0 ingress table_id 5 rules_number 8 pattern_template 10 actions_template 10
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern eth type is 0x86dd / end actions queue index 0 / end
flow push 0 queue 0
</Command to install QUEUE action>

 

In our application, once the DPDK application has processed the captured packets, it may need to install additional flow rules to perform decap/encap/port actions to deliver packets from PFs to VFs. These dynamically installed flow rules might look like those shown below, which you may have seen in my previous emails.

<Command to install finer matching and port action rule>
flow pattern_template 0 create transfer relaxed no pattern_template_id 20 template represented_port ethdev_port_id is 0 / eth type is 0x86dd / ipv6 dst is ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff / end

set raw_decap 0 eth / ipv6 / end_set
set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set

flow actions_template 0 create transfer actions_template_id 20 template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 / represented_port / end

flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 6 rules_number 8 pattern_template 20 actions_template 20

flow queue 0 create 0 template_table 6 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd / ipv6 dst is abcd:efgh:1234:5678:0:1:0:1 / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end
</Command to install finer matching and port action rule>
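At the C level, the raw decap/encap data for these actions is carried in struct rte_flow_action_raw_decap / struct rte_flow_action_raw_encap, and the target of represented_port in struct rte_flow_action_ethdev. The following is a hedged sketch mirroring the commands above; the buffer sizes and the zeroed dummy decap image are assumptions, not values from the original thread.

<C sketch of the decap/encap/port action configuration (illustrative)>
#include <rte_flow.h>
#include <rte_ether.h>

/* Remove the outer Ethernet (14 B) + IPv6 (40 B) headers, as in
 * "set raw_decap 0 eth / ipv6 / end_set". The data pointer may carry a
 * byte image of the removed headers; some drivers derive what they need
 * from size alone. */
static uint8_t decap_hdr[RTE_ETHER_HDR_LEN + 40]; /* zeroed dummy image */
static const struct rte_flow_action_raw_decap decap_conf = {
	.data = decap_hdr,
	.size = sizeof(decap_hdr),
};

/* Prepend a new Ethernet header, as in "set raw_encap 0 eth ... type is
 * 0x0800 / end_set"; the buffer must be filled byte-by-byte (dst MAC,
 * src MAC, EtherType 0x0800) before the rule is created. */
static uint8_t encap_hdr[RTE_ETHER_HDR_LEN];
static const struct rte_flow_action_raw_encap encap_conf = {
	.data = encap_hdr,
	.size = sizeof(encap_hdr),
};

/* "represented_port ethdev_port_id 3": forward to the representor
 * backing ethdev port 3. */
static const struct rte_flow_action_ethdev port_conf = { .port_id = 3 };

static const struct rte_flow_action transfer_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &port_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
</C sketch of the decap/encap/port action configuration (illustrative)>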

 

 

In the synchronous installation approach, we achieved our goal by installing these flow rules, with finer-granularity matching patterns similar to the above, into the same table as the QUEUE action rules. As you pointed out, QUEUE and RSS actions are not intended to be supported on transfer flow tables in async mode. Installing these decap/encap/port action rules in the same table is not viable due to the ingress attribute of the table, and jumping between ingress and transfer tables is also not an option, since they are not within the same e-switch domain.

To summarize the demands: we need to capture a portion of packets for the DPDK application while performing decap/encap/port actions on the other portion of packets on the same interface. Could you provide additional hints on how to address this use case? Thanks in advance.

Best regards,
Tao Li

 

From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wednesday, 12 June 2024 at 18:08
To: Tao Li <byteocean@hotmail.com>, users@dpdk.org <users@dpdk.org>
Cc: tao.li06@sap.com <tao.li06@sap.com>
Subject: RE: Failed to install QUEUE action using async API on ConnectX-6 NIC


Hi,

> From: Tao Li <byteocean@hotmail.com>
> Sent: Wednesday, June 12, 2024 16:45
> To: users@dpdk.org
> Cc: tao.li06@sap.com
> Subject: Failed to install QUEUE action using async API on ConnectX-6 = NIC
>
> Hi all,
>
> I am using the async API to install flow rules to perform the QUEUE action to capture packets matching a certain pattern for processing by a DPDK application. The ConnectX-6 NIC is configured in multiport e-switch mode, as outlined in the documentation (https://doc.dpdk.org/guides/nics/mlx5.html#multiport-e-switch). Currently, I am facing an issue where I cannot create the corresponding templates for this purpose. The commands to start testpmd and create pattern and action templates are as follows:
>
> <Command to start test-pmd>
> sudo ./dpdk-testpmd -a 3b:00.0,dv_flow_en=2,representor=pf0-1vf0 -- -i --rxq=1 --txq=1 --flow-isolate-all
> </Command to start test-pmd>
>
> <Not working test-pmd commands>
> port stop all
> flow configure 0 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 1 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 2 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> flow configure 3 queues_number 1 queues_size 10 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
> port start all
>
> flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth type is 0x86dd / end
> flow actions_template 0 create ingress actions_template_id 10 template queue / end mask queue index 0xffff / end
> flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10
> flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd / end actions queue index 0 / end
> flow push 0 queue 0
> </Not working test-pmd commands>
>
> The error encountered during the execution of the above testpmd commands is:
>
> <Encounted error>
> mlx5_net: [mlx5dr_action_print_combo]: Invalid action_type sequence > mlx5_net: [mlx5dr_action_print_combo]: TIR
> mlx5_net: [mlx5dr_matcher_check_and_process_at]: Invalid combination i= n action template
> mlx5_net: [mlx5dr_matcher_bind_at]: Invalid at 0
> </Encounted error>
>
> Upon closer inspection of the driver code in DPDK 23.11 (and the latest DPDK main branch), it appears that the error occurs because MLX5DR_ACTION_TYP_TIR is not listed as a valid action for the MLX5DR_TABLE_TYPE_FDB table type. If the following patch is applied, the error is resolved and the DPDK application is able to capture matching packets:
>
> <patch to apply>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
> index 862ee3e332..c444ec761e 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_action.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_action.c
> @@ -85,6 +85,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
>                 BIT(MLX5DR_ACTION_TYP_VPORT) |
>                 BIT(MLX5DR_ACTION_TYP_DROP) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ROOT) |
> +               BIT(MLX5DR_ACTION_TYP_TIR) |
>                 BIT(MLX5DR_ACTION_TYP_DEST_ARRAY),
>                 BIT(MLX5DR_ACTION_TYP_LAST),
>         },
> </patch to apply>
> I would greatly appreciate it if anyone could provide insight into whether this behavior is intentional or if it is a bug in the driver. Many thanks in advance.

The fact that it works with this code change is not intended behavior, and we do not support using QUEUE and RSS actions on transfer flow tables.
Also, there's another issue with table and actions template attributes:

- table is using transfer,
- actions template is using ingress.

Using them together is incorrect.
In the upcoming DPDK release, we are adding additional validations which would guard against that.

With your configuration, it is enough that you create an ingress flow table on port 0,
which will contain a flow rule matching IPv6 traffic and forwarding it to a queue on port 0.

By default, any traffic which is not explicitly dropped or forwarded in E-Switch will be handled by the ingress flow rules of the port on which the packet was received.
Since you're running with flow isolation enabled, this means that traffic will go to the kernel interface unless you explicitly match it on ingress.
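For reference, flow isolation is the mode that testpmd's --flow-isolate-all flag enables underneath, via rte_flow_isolate(). A minimal sketch, assuming the port has not yet been configured and started; the helper name is illustrative:

<C sketch of enabling flow isolation (illustrative)>
#include <rte_flow.h>

/* Enter isolated mode: only traffic explicitly matched by flow rules
 * reaches the DPDK queues; the rest stays on the kernel netdev (mlx5 is
 * a bifurcated driver). Should be applied as early as possible,
 * preferably before the port is configured. */
static int
enable_flow_isolation(uint16_t port_id)
{
	struct rte_flow_error err;

	/* 1 = enter isolated mode, 0 = leave it. */
	return rte_flow_isolate(port_id, 1, &err);
}
</C sketch of enabling flow isolation (illustrative)>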

>
> Best regards,
> Tao

Best regards,
Dariusz Sosnowski
