Hello,
Issue recap from the last mail: since DPDK 24.11.2, installing synchronous rte flow rules to match IP-in-IP packets (patterns such as eth / ipv6 / ipv4 or eth / ipv4 / ipv4) no longer works.
Recently I took a closer look at this issue and the related commit [1], and found that the attached change enables IP-in-IP header matching in synchronous mode regardless of whether the pattern appears in the outer or the inner header.
The reasoning is as follows. With the change introduced in [1], the condition `l3_tunnel_detection == l3_tunnel_inner` adds an extra flag (`MLX5_FLOW_LAYER_IPIP` or `MLX5_FLOW_LAYER_IPV6_ENCAP`) to `item_flags` when this pattern appears as part of the inner header. That extra flag is later consulted by `mlx5_flow_validate_item_ipv6` or `mlx5_flow_dv_validate_item_ipv4`, which rejects IP-in-IP encapsulation inside another tunnel. To explicitly allow both inner and outer IP-in-IP matching, the most direct fix is to remove the statement that adds this 'trick' flag for the inner-header case, so the pattern passes the validation check. It also seems more reasonable to set `tunnel` to 1 when an inner L3 tunnel is detected.
With this change, the following commands run without the error mentioned in the previous email:
```
sudo ./dpdk-testpmd -a 0000:3b:00.0,class=rxq_cqe_comp_en=0,rx_vec_en=1,representor=pf[0]vf[0-3] -a 0000:3b:00.1,class=rxq_cqe_comp_en=0,rx_vec_en=1 -- -i --rxq=1 --txq=1 --flow-isolate-all
flow create 0 ingress pattern eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
or
flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
```
I would appreciate more expert feedback on this finding.
Thanks.
[1] https://github.com/DPDK/dpdk-stable/commit/116949c7a7b780f147613068cbbd6257e6053654
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7b9e501..29c919a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7930,8 +7930,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
/*
* explicitly allow inner IPIP match
*/
- if (l3_tunnel_detection == l3_tunnel_outer) {
- item_flags |= l3_tunnel_flag;
+ if (l3_tunnel_detection == l3_tunnel_inner) {
tunnel = 1;
}
ret = mlx5_flow_dv_validate_item_ipv4(dev, items,
@@ -7957,8 +7956,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
/*
* explicitly allow inner IPIP match
*/
- if (l3_tunnel_detection == l3_tunnel_outer) {
- item_flags |= l3_tunnel_flag;
+ if (l3_tunnel_detection == l3_tunnel_inner) {
tunnel = 1;
}
ret = mlx5_flow_validate_item_ipv6(dev, items,
Best regards,
Tao Li
From: Li, Tao
Date: Wednesday, 28. May 2025 at 10:04
To: users
Subject: Failed to install synchronous rte flow rules to match IP in IP packets since DPDK 24.11.2
Hello All,
We are running software components that use synchronous rte flow rules to match IP-in-IP packets, i.e. we match packets with the pattern eth / ipv6 / ipv4 using rte flow rules. This approach worked until DPDK 24.11.2. After investigation, we discovered that this patch commit [1] breaks matching of the header pattern described above, as it seems to only consider IP-in-IP tunneling coexisting with VXLAN encapsulation. The error can be reproduced with the following testpmd commands.
```
sudo ./dpdk-testpmd -a 0000:3b:00.0,class=rxq_cqe_comp_en=0,rx_vec_en=1,representor=pf[0]vf[0-3] -a 0000:3b:00.1,class=rxq_cqe_comp_en=0,rx_vec_en=1 -- -i --rxq=1 --txq=1 --flow-isolate-all
flow create 0 ingress pattern eth / ipv6 proto is 0x0004 / end actions queue index 0 / end
or
flow create 0 ingress pattern eth / ipv4 proto is 0x0004 / end actions queue index 0 / end
```
and the following error will be emitted:
```
port_flow_complain(): Caught PMD error type 13 (specific pattern item): cause: 0x7ffc9943af78, multiple tunnel not supported: Invalid argument
```
We would appreciate knowing whether this is intended behavior or a negative side effect of the mentioned DPDK patch commit. Would it be possible to again support IP-in-IP encapsulation in the outer headers?
[1] https://github.com/DPDK/dpdk-stable/commit/116949c7a7b780f147613068cbbd6257e6053654
Best regards,
Tao Li