From: Asaf Penso <asafp@nvidia.com>
To: Tao Li <byteocean@hotmail.com>, users@dpdk.org
Subject: Re: Finer matching granularity with async template API
Date: Tue, 26 Mar 2024 19:43:03 +0000

Hello Tao,

Currently, we don't support IPinIP with template API.
It is on our roadmap, but there is still no concrete release date for it.

Regards,
Asaf Penso


From: Tao Li <byteocean@hotmail.com>
Sent: Friday, March 22, 2024 5:08:46 PM
To: Asaf Penso <asafp@nvidia.com>; users@dpdk.org <users@dpdk.org>
Subject: Re: Finer matching granularity with async template API

Hello Asaf,

 

We generate incoming IPinIP packets with our own, more complex setup, but below you can find a Python script that generates such packets for this purpose. I hope it helps to reproduce the issue. Thanks again.

 

<Code snippet to generate IPinIP packets>

#!/usr/bin/python3
# Build an ICMP echo request encapsulated as IPv4-in-IPv6 and send it on the wire.
from scapy.all import *
from scapy.layers.inet import Ether, UDP, ICMP
from scapy.layers.inet6 import *

ether = Ether()
ether.src = "src mac"        # placeholder: source MAC address
ether.dst = "dst mac"        # placeholder: destination MAC address
ether.type = 0x86DD          # EtherType: IPv6

ipv6 = IPv6()
ipv6.src = "src ipv6 addr"   # placeholder: outer IPv6 source address
ipv6.dst = "dst ipv6 addr"   # placeholder: outer IPv6 destination address
ipv6.nh = 4                  # next header 4 = IPv4 (IPinIP)

# Inner IPv4 packet carrying an ICMP echo request (type 8).
pkt = ether / ipv6 / IP(src="192.168.129.5", dst="172.32.4.9") / ICMP(type=8)

pkt.show()
sendp(pkt, iface="ens1f0np0")

</Code snippet to generate IPinIP packets>

 

Cheers,

Tao

 

From: Tao Li <byteocean@hotmail.com>
Date: Friday, 22. March 2024 at 14:19
To: Asaf Penso <asafp@nvidia.com>, users@dpdk.org <users@dpdk.org>
Subject: Re: Finer matching granularity with async template API

Hello Asaf,

 

Thanks for your speedy reply. Please find additional information based on your questions below; I hope it helps to clarify our purpose and the issue.

 

  1. Why ipv6/ipv4/icmp?

We are performing IPinIP tunnelling of traffic, and in the provided test-pmd example we encapsulate IPv4 packets from VMs into IPv6 underlay packets. The reference RFCs for this approach are RFC 1853 and RFC 2473. This article also provides a good visualization of the packet structure for this IPinIP tunnelling approach.

 

  2. What output / error message?

No crashing error message or anything similar is produced, which makes it difficult to debug what exactly is going on. What we observe is that incoming packets are not captured and processed by this flow rule, whereas a flow rule that only matches eth / ipv6 works. After removing the commands or code that match the inner IPv4 and ICMP headers, packets are processed successfully. The code snippets that programmatically implement the IPinIP tunnelling approach described above are as follows:

 

<Code snippet to initialise pattern masks>

static const struct rte_flow_item_eth flow_item_eth_mask = {
        .hdr.ether_type = 0xffff,
};

static const struct rte_flow_item_ipv6 flow_item_ipv6_dst_mask = {
        .hdr.proto = 0xff,
};

static const struct rte_flow_item_ipv4 flow_item_ipv4_proto_mask = {
        .hdr.next_proto_id = 0xff,
};

static const struct rte_flow_item_icmp flow_item_icmp_mask = {
        .hdr.icmp_type = 0xff,
};

</Code snippet to initialise pattern masks>

 

<Code snippet to create pattern template>

// pattern template
struct rte_flow_item pattern[] = {
        [0] = {.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .mask = &represented_port_mask},
        [1] = {.type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &flow_item_eth_mask},
        [2] = {.type = RTE_FLOW_ITEM_TYPE_IPV6, .mask = &flow_item_ipv6_dst_mask},
        [3] = {.type = RTE_FLOW_ITEM_TYPE_IPV4, .mask = &flow_item_ipv4_proto_mask},
        [4] = {.type = RTE_FLOW_ITEM_TYPE_ICMP, .mask = &flow_item_icmp_mask},
        [5] = {.type = RTE_FLOW_ITEM_TYPE_END,},
};

port_template_info_pf.pattern_templates[0] = create_pattern_template(main_eswitch_port, pattern);

</Code snippet to create pattern template>
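
For reference, the create_pattern_template() helper used above is not shown in the snippet. Below is a minimal sketch of what it could look like on top of rte_flow_pattern_template_create(); the attribute values are assumptions chosen to mirror the "transfer relaxed no" testpmd command later in this thread, not code from the original application.

<Code sketch: possible create_pattern_template() helper>

#include <rte_flow.h>

/* Assumed implementation of the create_pattern_template() helper used above.
 * The original body is not shown, so the attribute choices are guesses that
 * mirror the "flow pattern_template ... create transfer relaxed no" command. */
static struct rte_flow_pattern_template *
create_pattern_template(uint16_t port_id, const struct rte_flow_item pattern[])
{
        struct rte_flow_error error;
        const struct rte_flow_pattern_template_attr attr = {
                .relaxed_matching = 0, /* every item in the pattern must match */
                .transfer = 1,         /* E-Switch (transfer) rules */
        };

        return rte_flow_pattern_template_create(port_id, &attr, pattern, &error);
}

</Code sketch: possible create_pattern_template() helper>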

 

<Code snippet to create patterns>

struct rte_flow_item_eth eth_pattern = {.type = htons(0x86DD)};

struct rte_flow_item_ipv6 ipv6_hdr = {0};
ipv6_hdr.hdr.proto = IPPROTO_IPIP;

struct rte_flow_item_ipv4 ipv4_hdr = {0};
ipv4_hdr.hdr.next_proto_id = IPPROTO_ICMP;

struct rte_flow_item_icmp icmp_hdr = {0};
icmp_hdr.hdr.icmp_type = RTE_IP_ICMP_ECHO_REQUEST;

struct rte_flow_item_ethdev represented_port = {.port_id = pf_port_id};

struct rte_flow_item concrete_patterns[6];

concrete_patterns[0].type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT;
concrete_patterns[0].spec = &represented_port;
concrete_patterns[0].mask = NULL;
concrete_patterns[0].last = NULL;

concrete_patterns[1].type = RTE_FLOW_ITEM_TYPE_ETH;
concrete_patterns[1].spec = &eth_pattern;
concrete_patterns[1].mask = NULL;
concrete_patterns[1].last = NULL;

concrete_patterns[2].type = RTE_FLOW_ITEM_TYPE_IPV6;
concrete_patterns[2].spec = &ipv6_hdr;
concrete_patterns[2].mask = NULL;
concrete_patterns[2].last = NULL;

concrete_patterns[3].type = RTE_FLOW_ITEM_TYPE_IPV4;
concrete_patterns[3].spec = &ipv4_hdr;
concrete_patterns[3].mask = NULL;
concrete_patterns[3].last = NULL;

concrete_patterns[4].type = RTE_FLOW_ITEM_TYPE_ICMP;
concrete_patterns[4].spec = &icmp_hdr;
concrete_patterns[4].mask = NULL;
concrete_patterns[4].last = NULL;

concrete_patterns[5].type = RTE_FLOW_ITEM_TYPE_END;
concrete_patterns[5].spec = NULL;
concrete_patterns[5].mask = NULL;
concrete_patterns[5].last = NULL;

</Code snippet to create patterns>
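
For completeness, the sketch below shows one way such concrete patterns could be enqueued and verified through the async API (rte_flow_async_create, rte_flow_push, rte_flow_pull). It is a hedged illustration: the template_table pointer and the concrete_actions array are assumptions, since the original snippet does not show them.

<Code sketch: enqueueing the rule and checking its completion status>

#include <rte_flow.h>

/* Hedged sketch: "table" and "concrete_actions" are assumed to exist and are
 * not part of the original snippet. Template indices 0/0 refer to the first
 * pattern and actions templates of the table, as in the testpmd command
 * "flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0". */
static int
enqueue_rule_sketch(uint16_t port_id, uint32_t queue_id,
                    struct rte_flow_template_table *table,
                    const struct rte_flow_item concrete_patterns[],
                    const struct rte_flow_action concrete_actions[])
{
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_op_result result;
        struct rte_flow_error error;
        struct rte_flow *flow;
        int n;

        /* Enqueue the rule creation on the given flow queue. */
        flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
                                     concrete_patterns, 0,
                                     concrete_actions, 0,
                                     NULL, &error);
        if (flow == NULL)
                return -1;

        /* Push the queued operation to the hardware ("flow push 0 queue 0"). */
        if (rte_flow_push(port_id, queue_id, &error) < 0)
                return -1;

        /* Async operations report success or failure only through the
         * completion queue, so poll it to learn whether the rule was accepted. */
        do {
                n = rte_flow_pull(port_id, queue_id, &result, 1, &error);
        } while (n == 0);

        return (n == 1 && result.status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
}

</Code sketch: enqueueing the rule and checking its completion status>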

 

 

Looking forward to your further support, and many thanks in advance.

 

Best regards,

Tao

 

 

From: Asaf Penso <asafp@nvidia.com>
Date: Thursday, 21. March 2024 at 20:18
To: Tao Li <byteocean@hotmail.com>, users@dpdk.org <users@dpdk.org>
Subject: Re: Finer matching granularity with async template API

BTW,

In the non-working example I see ipv6 / ipv4 / ICMP. Was this your intention, or did you mean ipv6 / ICMP?

 

Regards,

Asaf Penso


From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, March 21, 2024 9:17:04 PM
To: Tao Li <byteocean@hotmail.com>; users@dpdk.org <users@dpdk.org>
Subject: Re: Finer matching granularity with async template API

 

Hello Tao,

 

What is the output / error message you get?

 

 

Regards,

Asaf Penso


From: Tao Li <byteocean@hotmail.com>
Sent: Thursday, March 21, 2024 5:44:00 PM
To: users@dpdk.org <users@dpdk.org>
Subject: Finer matching granularity with async template API

 

Hi all,

 

I am using the async template API to install flow rules that perform actions on packets to achieve IP(v4)-in-IP(v6) tunnelling. Currently I am facing an issue where I cannot match incoming traffic with finer granularity. The test-pmd commands in use are as follows:

 

<Not working test-pmd commands>

port stop all

 

flow configure 0 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0   # PF0

flow configure 1 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0

flow configure 2 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0

flow configure 3 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0   # PF1V0

 

port start all

set verbose 1

 

flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end

 

set raw_decap 0 eth / ipv6 / end_set

set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set

 

flow actions_template 0 create transfer actions_template_id 10 template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 / represented_port / end

 

flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10

 

flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end

 

flow push 0 queue 0

</Not working test-pmd commands>
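
For readers following the C snippets above, the actions side of these commands could look roughly like the sketch below. This is an assumption-based illustration, not code from the original application: the encap_data buffer (the new outer Ethernet header written by raw_encap) and the decap/encap sizes are supplied by the caller.

<Code sketch: possible C counterpart of the actions_template command>

#include <stddef.h>
#include <rte_flow.h>

/* Hedged sketch of an actions template that strips the outer eth/ipv6 headers
 * (raw_decap), prepends a new Ethernet header (raw_encap) and forwards to the
 * represented port, mirroring the "flow actions_template 0 create transfer ..."
 * command above. */
static struct rte_flow_actions_template *
create_actions_template_sketch(uint16_t port_id, uint16_t dst_port_id,
                               uint8_t *encap_data, size_t encap_size,
                               size_t decap_size)
{
        struct rte_flow_error error;
        const struct rte_flow_actions_template_attr attr = { .transfer = 1 };

        struct rte_flow_action_raw_decap decap = { .size = decap_size };
        struct rte_flow_action_raw_encap encap = {
                .data = encap_data,
                .size = encap_size,
        };
        struct rte_flow_action_ethdev out_port = { .port_id = dst_port_id };

        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
                { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
                { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &out_port },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        /* The masks mirror the "mask raw_decap index 0 / raw_encap index 0 /
         * represented_port" part of the testpmd command. */
        const struct rte_flow_action masks[] = {
                { .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
                { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
                { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_actions_template_create(port_id, &attr, actions, masks, &error);
}

</Code sketch: possible C counterpart of the actions_template command>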

 

Once I remove the matching patterns for the inner packet headers (ipv4 / icmp) as follows, I can see the processed packets inside the VMs using tcpdump.

 

<Working test-pmd commands>

…

flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / end

…

flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end

…

</Working test-pmd commands>

 

A similar combination works when using the synchronous rte_flow API. Any comment or suggestion on this issue is much appreciated. Many thanks in advance.
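
For comparison, the synchronous path mentioned here boils down to a single rte_flow_create() call; the sketch below is illustrative only, with the attribute values and the pattern/actions arrays being assumptions rather than the original code.

<Code sketch: synchronous counterpart using rte_flow_create()>

#include <rte_flow.h>

/* Hedged sketch of the synchronous equivalent: one blocking call installs the
 * rule, and any failure is reported immediately through rte_flow_error. */
static struct rte_flow *
create_rule_sync_sketch(uint16_t port_id,
                        const struct rte_flow_item pattern[],
                        const struct rte_flow_action actions[])
{
        struct rte_flow_error error;
        const struct rte_flow_attr attr = {
                .group = 0,
                .transfer = 1, /* same E-Switch scope as the async example */
        };

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

</Code sketch: synchronous counterpart using rte_flow_create()>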

 

Best regards,

Tao

 

 

