Hello Asaf,

Thanks for your speedy reply. Please find additional information based on your questions below; I hope it helps to clarify our purpose and the issue.

1. Why ipv6/ipv4/icmp?

We are performing IPinIP tunnelling for traffic, and in the provided test-pmd example we encapsulate IPv4 packets from VMs into IPv6 underlay packets. The reference RFCs for this approach are RFC 1853 and RFC 2473. This article also provides a good visualization of the packet structures used in this IPinIP tunnelling approach.

2. What output / error message?

No crash or error message occurs, which makes it difficult for us to debug what exactly is going on. What we observe is that incoming packets are not captured and processed by this flow rule, compared with a flow rule that only performs eth / ipv6 matching. After removing the commands or code that perform inner-header matching on IPv4 and ICMP, packets are processed successfully.

The code snippets that programmatically implement the IPinIP tunnelling approach described above are as follows:

static const struct rte_flow_item_eth flow_item_eth_mask = {
    .hdr.ether_type = 0xffff,
};

static const struct rte_flow_item_ipv6 flow_item_ipv6_dst_mask = {
    .hdr.proto = 0xff,
};

static const struct rte_flow_item_ipv4 flow_item_ipv4_proto_mask = {
    .hdr.next_proto_id = 0xff,
};

static const struct rte_flow_item_icmp flow_item_icmp_mask = {
    .hdr.icmp_type = 0xff,
};

// pattern template
struct rte_flow_item pattern[] = {
    [0] = {.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .mask = &represented_port_mask},
    [1] = {.type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &flow_item_eth_mask},
    [2] = {.type = RTE_FLOW_ITEM_TYPE_IPV6, .mask = &flow_item_ipv6_dst_mask},
    [3] = {.type = RTE_FLOW_ITEM_TYPE_IPV4, .mask = &flow_item_ipv4_proto_mask},
    [4] = {.type = RTE_FLOW_ITEM_TYPE_ICMP, .mask = &flow_item_icmp_mask},
    [5] = {.type = RTE_FLOW_ITEM_TYPE_END,},
};

port_template_info_pf.pattern_templates[0] = create_pattern_template(main_eswitch_port, pattern);

// concrete match values for the flow rule
struct rte_flow_item_eth eth_pattern = {.type = htons(0x86DD)};

struct rte_flow_item_ipv6 ipv6_hdr = {0};
ipv6_hdr.hdr.proto = IPPROTO_IPIP;

struct rte_flow_item_ipv4 ipv4_hdr = {0};
ipv4_hdr.hdr.next_proto_id = IPPROTO_ICMP;

struct rte_flow_item_icmp icmp_hdr = {0};
icmp_hdr.hdr.icmp_type = RTE_IP_ICMP_ECHO_REQUEST;

struct rte_flow_item_ethdev represented_port = {.port_id = pf_port_id};

struct rte_flow_item concrete_patterns[6];

concrete_patterns[0].type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT;
concrete_patterns[0].spec = &represented_port;
concrete_patterns[0].mask = NULL;
concrete_patterns[0].last = NULL;

concrete_patterns[1].type = RTE_FLOW_ITEM_TYPE_ETH;
concrete_patterns[1].spec = &eth_pattern;
concrete_patterns[1].mask = NULL;
concrete_patterns[1].last = NULL;

concrete_patterns[2].type = RTE_FLOW_ITEM_TYPE_IPV6;
concrete_patterns[2].spec = &ipv6_hdr;
concrete_patterns[2].mask = NULL;
concrete_patterns[2].last = NULL;

concrete_patterns[3].type = RTE_FLOW_ITEM_TYPE_IPV4;
concrete_patterns[3].spec = &ipv4_hdr;
concrete_patterns[3].mask = NULL;
concrete_patterns[3].last = NULL;

concrete_patterns[4].type = RTE_FLOW_ITEM_TYPE_ICMP;
concrete_patterns[4].spec = &icmp_hdr;
concrete_patterns[4].mask = NULL;
concrete_patterns[4].last = NULL;

concrete_patterns[5].type = RTE_FLOW_ITEM_TYPE_END;
concrete_patterns[5].spec = NULL;
concrete_patterns[5].mask = NULL;
concrete_patterns[5].last = NULL;
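For completeness, a minimal sketch of how such a rule is enqueued and its completion checked on our side. The names table, concrete_actions and flow_queue_id are placeholders rather than our exact code:

/* Hedged sketch: enqueue the rule built from concrete_patterns above,
 * flush the queue, and poll the completion status. */
#include <stdio.h>
#include <rte_flow.h>

static struct rte_flow *
enqueue_ipip_icmp_rule(uint16_t proxy_port_id, uint32_t flow_queue_id,
                       struct rte_flow_template_table *table,
                       const struct rte_flow_item concrete_patterns[],
                       const struct rte_flow_action concrete_actions[])
{
    const struct rte_flow_op_attr op_attr = { .postpone = 0 };
    struct rte_flow_error error = { 0 };
    struct rte_flow_op_result result;
    struct rte_flow *flow;
    int n;

    flow = rte_flow_async_create(proxy_port_id, flow_queue_id, &op_attr, table,
                                 concrete_patterns, 0 /* pattern template index */,
                                 concrete_actions, 0 /* actions template index */,
                                 NULL /* user_data */, &error);
    if (flow == NULL) {
        printf("async create failed: %s\n",
               error.message ? error.message : "(no message)");
        return NULL;
    }

    /* An enqueued operation may only be rejected by the PMD once it is
     * pushed and its completion is pulled, so check the result status. */
    rte_flow_push(proxy_port_id, flow_queue_id, &error);
    n = rte_flow_pull(proxy_port_id, flow_queue_id, &result, 1, &error);
    if (n == 1 && result.status != RTE_FLOW_OP_SUCCESS)
        printf("rule was rejected at completion time\n");

    return flow;
}

If the PMD rejected the rule at creation time, we would expect it to surface in the status returned by rte_flow_pull(), but we do not see a failure reported there either.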
Looking forward to your further support, and many thanks in advance.

Best regards,
Tao

From: Asaf Penso
Date: Thursday, 21. March 2024 at 20:18
To: Tao Li, users@dpdk.org
Subject: Re: Finer matching granularity with async template API

BTW, in the non-working example I see ipv6 / ipv4 / ICMP. Was this your intention, or did you mean ipv6 / ICMP?

Regards,
Asaf Penso

________________________________
From: Asaf Penso
Sent: Thursday, March 21, 2024 9:17:04 PM
To: Tao Li; users@dpdk.org
Subject: Re: Finer matching granularity with async template API

Hello Tao,

What is the output / error message you get?

Regards,
Asaf Penso

________________________________
From: Tao Li
Sent: Thursday, March 21, 2024 5:44:00 PM
To: users@dpdk.org
Subject: Finer matching granularity with async template API

Hi all,

I am using the async template API to install flow rules that perform actions on packets to achieve IP(v4)inIP(v6) tunnelling. Currently I am facing an issue where I cannot match incoming traffic with finer granularity. The test-pmd commands in use are as follows:

port stop all
flow configure 0 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0   # PF0
flow configure 1 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 2 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0
flow configure 3 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0   # PF1V0
port start all
set verbose 1

flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end

set raw_decap 0 eth / ipv6 / end_set
set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set

flow actions_template 0 create transfer actions_template_id 10 template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 / represented_port / end

flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10

flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / ipv4 / icmp / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end

flow push 0 queue 0

Once I remove the matching patterns for the inner packet headers (ipv4 / icmp) as follows, I can see the processed packets inside the VMs using tcpdump.

…
flow pattern_template 0 create transfer relaxed no pattern_template_id 10 template represented_port ethdev_port_id is 0 / eth / ipv6 / end
…
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth / ipv6 / end actions raw_decap index 0 / raw_encap index 0 / represented_port ethdev_port_id 3 / end
…

A similar combination works when using the synchronous rte_flow API. Any comment or suggestion on this issue is much appreciated. Many thanks in advance.

Best regards,
Tao
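P.S. For reference, a minimal sketch of the kind of synchronous rule that works for us. Function and variable names here are illustrative, not our exact code, and the action list is reduced to a plain forward to a representor; the real rule also applies the raw_decap / raw_encap pair shown above:

/* Hedged sketch: synchronous equivalent of the async rule, matching
 * represented_port / eth / ipv6 / ipv4 / icmp in the transfer domain. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <rte_flow.h>
#include <rte_icmp.h>

static struct rte_flow *
create_sync_icmp_in_ipip_rule(uint16_t proxy_port_id, uint16_t wire_port_id,
                              uint16_t dst_port_id, struct rte_flow_error *error)
{
    const struct rte_flow_attr attr = { .group = 0, .transfer = 1 };

    const struct rte_flow_item_ethdev rep_port = { .port_id = wire_port_id };
    const struct rte_flow_item_ethdev rep_port_mask = { .port_id = 0xffff };
    const struct rte_flow_item_eth eth = { .hdr.ether_type = htons(0x86DD) };
    const struct rte_flow_item_eth eth_mask = { .hdr.ether_type = 0xffff };
    const struct rte_flow_item_ipv6 ipv6 = { .hdr.proto = IPPROTO_IPIP };
    const struct rte_flow_item_ipv6 ipv6_mask = { .hdr.proto = 0xff };
    const struct rte_flow_item_ipv4 ipv4 = { .hdr.next_proto_id = IPPROTO_ICMP };
    const struct rte_flow_item_ipv4 ipv4_mask = { .hdr.next_proto_id = 0xff };
    const struct rte_flow_item_icmp icmp = { .hdr.icmp_type = RTE_IP_ICMP_ECHO_REQUEST };
    const struct rte_flow_item_icmp icmp_mask = { .hdr.icmp_type = 0xff };

    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
          .spec = &rep_port, .mask = &rep_port_mask },
        { .type = RTE_FLOW_ITEM_TYPE_ETH,  .spec = &eth,  .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6, .spec = &ipv6, .mask = &ipv6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4, .mask = &ipv4_mask },
        { .type = RTE_FLOW_ITEM_TYPE_ICMP, .spec = &icmp, .mask = &icmp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    const struct rte_flow_action_ethdev fwd = { .port_id = dst_port_id };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &fwd },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(proxy_port_id, &attr, pattern, actions, error);
}

With the synchronous API the per-rule masks are passed directly alongside the specs, so no separate pattern template is involved.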