DPDK usage discussions
* Finer matching granularity with async template API
@ 2024-03-21 15:44 Tao Li
  2024-03-21 19:17 ` Asaf Penso
  0 siblings, 1 reply; 7+ messages in thread
From: Tao Li @ 2024-03-21 15:44 UTC (permalink / raw)
  To: users

[-- Attachment #1: Type: text/plain, Size: 2605 bytes --]

Hi all,

I am using the async template API to install flow rules that act on packets in order to achieve IP(v4)-in-IP(v6) tunnelling. Currently I am facing an issue where I cannot match incoming traffic at finer granularity. The test-pmd commands in use are as follows:

<Not working test-pmd commands>
port stop all

flow configure 0 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0   # PF0

flow configure 1 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0

flow configure 2 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0

flow configure 3 queues_number 4 queues_size 64 counters_number 0 aging_counters_number 0 meters_number 0 flags 0  # PF1V0

port start all
set verbose 1

flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth  / ipv6 / ipv4 / icmp  / end

set raw_decap 0 eth  / ipv6 / end_set
set raw_encap 0 eth src is 11:22:33:44:55:66 dst is 66:9d:a7:fd:fb:43 type is 0x0800 / end_set

flow actions_template 0 create transfer  actions_template_id 10  template raw_decap index 0 / raw_encap index 0 / represented_port / end mask raw_decap index 0 / raw_encap index 0 /  represented_port  / end

flow template_table 0 create  group 0 priority 0  transfer wire_orig table_id 5 rules_number 8 pattern_template 10 actions_template 10

flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth  / ipv6  / ipv4 / icmp  / end actions raw_decap index 0 / raw_encap index 0 /  represented_port ethdev_port_id 3 / end

flow push 0 queue 0
</Not working test-pmd commands>

Once I remove the matching patterns for the inner packet headers (ipv4 / icmp), as follows, I can see the processed packets inside the VMs using tcpdump.

<Working test-pmd commands>
…
flow pattern_template 0 create transfer relaxed no pattern_template_id 10  template represented_port ethdev_port_id is 0 / eth  / ipv6 / end
…
flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth  / ipv6   / end actions raw_decap index 0 / raw_encap index 0 /  represented_port ethdev_port_id 3 / end
…
</Working test-pmd commands>

A similar combination works when using the synchronous rte_flow API. Any comment or suggestion on this issue is much appreciated. Many thanks in advance.
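For reference, a condensed sketch of what such a rule looks like with the synchronous API is below; the port IDs and the raw encap/decap buffers are illustrative placeholders rather than our exact values.

<Code sketch: synchronous rule creation (illustrative only)>
/* Fragment only; assumes the usual EAL and port setup has already been done.
 * Port IDs and the encap/decap buffers are placeholders. */
struct rte_flow_attr attr = { .group = 0, .transfer = 1 };

struct rte_flow_item_ethdev wire_port = { .port_id = 0 };        /* PF0 */
struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &wire_port },
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_ICMP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};

uint8_t decap_buf[14 + 40] = { 0 };   /* outer Ethernet (14 B) + IPv6 (40 B) to strip */
uint8_t encap_buf[14] = { 0 };        /* new Ethernet header to push, EtherType 0x0800 */
struct rte_flow_action_raw_decap decap = { .data = decap_buf, .size = sizeof(decap_buf) };
struct rte_flow_action_raw_encap encap = { .data = encap_buf, .size = sizeof(encap_buf) };
struct rte_flow_action_ethdev vf_port = { .port_id = 3 };        /* PF1V0 */

struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
        { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &vf_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};

struct rte_flow_error err;
struct rte_flow *flow = rte_flow_create(0 /* proxy port */, &attr, pattern, actions, &err);
</Code sketch: synchronous rule creation (illustrative only)>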

Best regards,
Tao




[-- Attachment #2: Type: text/html, Size: 7926 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-21 15:44 Finer matching granularity with async template API Tao Li
@ 2024-03-21 19:17 ` Asaf Penso
  2024-03-21 19:18   ` Asaf Penso
  0 siblings, 1 reply; 7+ messages in thread
From: Asaf Penso @ 2024-03-21 19:17 UTC (permalink / raw)
  To: Tao Li, users

[-- Attachment #1: Type: text/plain, Size: 2999 bytes --]

Hello Tao,

What is the output / error message you get?


Regards,
Asaf Penso

[-- Attachment #2: Type: text/html, Size: 7644 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-21 19:17 ` Asaf Penso
@ 2024-03-21 19:18   ` Asaf Penso
  2024-03-22 13:19     ` Tao Li
  0 siblings, 1 reply; 7+ messages in thread
From: Asaf Penso @ 2024-03-21 19:18 UTC (permalink / raw)
  To: Tao Li, users

[-- Attachment #1: Type: text/plain, Size: 3386 bytes --]

BTW,
In the non-working example I see ipv6 / ipv4 / ICMP. Was this your intention, or did you mean ipv6 / ICMP?

Regards,
Asaf Penso

[-- Attachment #2: Type: text/html, Size: 8458 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-21 19:18   ` Asaf Penso
@ 2024-03-22 13:19     ` Tao Li
  2024-03-22 15:08       ` Tao Li
  0 siblings, 1 reply; 7+ messages in thread
From: Tao Li @ 2024-03-22 13:19 UTC (permalink / raw)
  To: Asaf Penso, users

[-- Attachment #1: Type: text/plain, Size: 8651 bytes --]

Hello Asaf,

Thanks for your speedy reply. Please find additional information on your questions below; I hope it helps to clarify our purpose and the issue.

  1.  Why ipv6/ipv4/icmp?

We are performing IPinIP tunnelling for traffic; in the provided test-pmd example we encapsulate IPv4 packets from VMs into IPv6 underlay packets. The reference RFCs for this approach are RFC 1853 and RFC 2473. This article<https://www.h3c.com/en/Support/Resource_Center/HK/Switches/H3C_S7500E_X/S7500E-X/Technical_Documents/Configure___Deploy/Configuration_Guides/H3C_S7500E-X_CG-Release7178-6W100/05/201602/914694_294551_0.htm> also provides a good visualization of the packet structures used in this IPinIP tunnelling approach.

  2.  What output / error message?

No crash or error message occurs, which makes it difficult for us to debug what exactly is going on. What we observe is that incoming packets are not captured and processed by this flow rule, in contrast to the flow rule that only performs eth/ipv6 matching. After removing the commands or code that match the inner IPv4 and ICMP headers, the packets are processed successfully. The code snippets that programmatically implement the IPinIP tunnelling approach described above are as follows:

<Code snippet to initialise pattern masks>
/* Match on the outer EtherType only. */
static const struct rte_flow_item_eth flow_item_eth_mask = {
        .hdr.ether_type = 0xffff,
};

/* Match on the outer IPv6 next-header field (expected to carry IPIP). */
static const struct rte_flow_item_ipv6 flow_item_ipv6_dst_mask = {
        .hdr.proto = 0xff,
};

/* Match on the inner IPv4 protocol field (expected to carry ICMP). */
static const struct rte_flow_item_ipv4 flow_item_ipv4_proto_mask = {
        .hdr.next_proto_id = 0xff,
};

/* Match on the ICMP type (expected to be echo request). */
static const struct rte_flow_item_icmp flow_item_icmp_mask = {
        .hdr.icmp_type = 0xff,
};

</Code snippet to initialise pattern masks>

<Code snippet to create pattern template>
        // pattern template
        struct rte_flow_item pattern[] = {
                [0] = {.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .mask = &represented_port_mask},
                [1] = {.type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &flow_item_eth_mask},
                [2] = {.type = RTE_FLOW_ITEM_TYPE_IPV6, .mask = &flow_item_ipv6_dst_mask},
                [3] = {.type = RTE_FLOW_ITEM_TYPE_IPV4, .mask = &flow_item_ipv4_proto_mask},
                [4] = {.type = RTE_FLOW_ITEM_TYPE_ICMP, .mask = &flow_item_icmp_mask},
                [5] = {.type = RTE_FLOW_ITEM_TYPE_END,},
        };

        port_template_info_pf.pattern_templates[0] = create_pattern_template(main_eswitch_port, pattern);
</Code snippet to create pattern template>
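For completeness, create_pattern_template is a thin helper of ours around rte_flow_pattern_template_create; a simplified sketch is below, with the attribute values mirroring the "transfer relaxed no" settings from the testpmd commands above.

<Code sketch: create_pattern_template helper (simplified)>
static struct rte_flow_pattern_template *
create_pattern_template(uint16_t port_id, const struct rte_flow_item pattern[])
{
        /* Mirrors "flow pattern_template <port> create transfer relaxed no ...". */
        const struct rte_flow_pattern_template_attr attr = {
                .relaxed_matching = 0,
                .transfer = 1,
        };
        struct rte_flow_error error;

        return rte_flow_pattern_template_create(port_id, &attr, pattern, &error);
}
</Code sketch: create_pattern_template helper (simplified)>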



<Code snippet to create patterns>
        struct rte_flow_item_eth eth_pattern = {.type = htons(0x86DD)};

        struct rte_flow_item_ipv6 ipv6_hdr = {0};
        ipv6_hdr.hdr.proto = IPPROTO_IPIP;

        struct rte_flow_item_ipv4 ipv4_hdr = {0};
        ipv4_hdr.hdr.next_proto_id = IPPROTO_ICMP;

        struct rte_flow_item_icmp icmp_hdr = {0};
        icmp_hdr.hdr.icmp_type = RTE_IP_ICMP_ECHO_REQUEST;

        struct rte_flow_item_ethdev represented_port = {.port_id = pf_port_id};

        struct rte_flow_item concrete_patterns[6];

        concrete_patterns[0].type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT;
        concrete_patterns[0].spec = &represented_port;
        concrete_patterns[0].mask = NULL;
        concrete_patterns[0].last = NULL;

        concrete_patterns[1].type = RTE_FLOW_ITEM_TYPE_ETH;
        concrete_patterns[1].spec = &eth_pattern;
        concrete_patterns[1].mask = NULL;
        concrete_patterns[1].last = NULL;

        concrete_patterns[2].type = RTE_FLOW_ITEM_TYPE_IPV6;
        concrete_patterns[2].spec = &ipv6_hdr;
        concrete_patterns[2].mask = NULL;
        concrete_patterns[2].last = NULL;

        concrete_patterns[3].type = RTE_FLOW_ITEM_TYPE_IPV4;
        concrete_patterns[3].spec = &ipv4_hdr;
        concrete_patterns[3].mask = NULL;
        concrete_patterns[3].last = NULL;

        concrete_patterns[4].type = RTE_FLOW_ITEM_TYPE_ICMP;
        concrete_patterns[4].spec = &icmp_hdr;
        concrete_patterns[4].mask = NULL;
        concrete_patterns[4].last = NULL;

        concrete_patterns[5].type = RTE_FLOW_ITEM_TYPE_END;
        concrete_patterns[5].spec = NULL;
        concrete_patterns[5].mask = NULL;
        concrete_patterns[5].last = NULL;
</Code snippet to create patterns>
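The concrete patterns above are then enqueued against the template table. A condensed sketch of that step is below; template_table and concrete_actions are illustrative names standing for the table created from the templates and for the raw_decap/raw_encap/represented_port action list.

<Code sketch: enqueueing the rule with the async API (simplified)>
        struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_error error;

        /* Enqueue the rule on flow queue 0 of the template table; keep the
         * returned handle for a later rte_flow_async_destroy(). */
        struct rte_flow *flow = rte_flow_async_create(main_eswitch_port, 0, &op_attr,
                        template_table,
                        concrete_patterns, 0,    /* pattern template index */
                        concrete_actions, 0,     /* actions template index */
                        NULL, &error);

        /* Push the queued operation to the hardware and poll its result;
         * in practice, retry rte_flow_pull() until the result arrives. */
        rte_flow_push(main_eswitch_port, 0, &error);

        struct rte_flow_op_result res[1];
        int n = rte_flow_pull(main_eswitch_port, 0, res, 1, &error);
        if (n == 1 && res[0].status != RTE_FLOW_OP_SUCCESS)
                printf("async rule creation reported an error\n");
</Code sketch: enqueueing the rule with the async API (simplified)>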




Looking forward to your further support, and many thanks in advance.

Best regards,
Tao



[-- Attachment #2: Type: text/html, Size: 30178 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-22 13:19     ` Tao Li
@ 2024-03-22 15:08       ` Tao Li
  2024-03-26 19:43         ` Asaf Penso
  0 siblings, 1 reply; 7+ messages in thread
From: Tao Li @ 2024-03-22 15:08 UTC (permalink / raw)
  To: Asaf Penso, users

[-- Attachment #1: Type: text/plain, Size: 9632 bytes --]

Hello Asaf,

We generate the incoming IPinIP packets with our own, more complex setup, but below is a Python script that generates such packets for the same purpose. I hope it helps to reproduce this issue. Thanks again.

<Code snippet to generate IPinIP packets>
#!/usr/bin/python3
from scapy.all import *
from scapy.layers.inet import Ether, UDP, ICMP
from scapy.layers.inet6 import *

ether = Ether()
ether.src = "src mac"
ether.dst = "dst mac"
ether.type = 0x86DD

ipv6 = IPv6()
ipv6.src = "src ipv6 addr"
ipv6.dst = "dst ipv6 addr"
ipv6.nh = 4  # next header 4 = IPv4 encapsulated in IPv6 (IPIP)

pkt = ether / ipv6 / IP(src="192.168.129.5", dst="172.32.4.9") / ICMP(type=8)

print(pkt.show())
sendp(pkt, iface="ens1f0np0")
</Code snippet to generate IPinIP packets>

Cheers,
Tao

[-- Attachment #2: Type: text/html, Size: 38461 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-22 15:08       ` Tao Li
@ 2024-03-26 19:43         ` Asaf Penso
  2024-04-03 10:06           ` Tao Li
  0 siblings, 1 reply; 7+ messages in thread
From: Asaf Penso @ 2024-03-26 19:43 UTC (permalink / raw)
  To: Tao Li, users

[-- Attachment #1: Type: text/plain, Size: 10048 bytes --]

Hello Tao,

Currently, we don't support IPinIP with the template API.
It is on our roadmap, but there is still no concrete release date for it.

Regards,
Asaf Penso

[-- Attachment #2: Type: text/html, Size: 49415 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Finer matching granularity with async template API
  2024-03-26 19:43         ` Asaf Penso
@ 2024-04-03 10:06           ` Tao Li
  0 siblings, 0 replies; 7+ messages in thread
From: Tao Li @ 2024-04-03 10:06 UTC (permalink / raw)
  To: Asaf Penso, users

[-- Attachment #1: Type: text/plain, Size: 10374 bytes --]

Hello Asaf,

Thanks for sharing this info; we are looking forward to support for this feature.

Best regards,
Tao

[-- Attachment #2: Type: text/html, Size: 49953 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2024-04-03 10:06 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-21 15:44 Finer matching granularity with async template API Tao Li
2024-03-21 19:17 ` Asaf Penso
2024-03-21 19:18   ` Asaf Penso
2024-03-22 13:19     ` Tao Li
2024-03-22 15:08       ` Tao Li
2024-03-26 19:43         ` Asaf Penso
2024-04-03 10:06           ` Tao Li

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).