From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id E8E3BA0352;
	Tue, 5 Nov 2019 09:09:11 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 9AE278F96;
	Tue, 5 Nov 2019 09:03:22 +0100 (CET)
Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130])
	by dpdk.org (Postfix) with ESMTP id 93C531BE0C
	for ; Tue, 5 Nov 2019 09:03:21 +0100 (CET)
From: Xiaoyu Min
To: orika@mellanox.com, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko
Cc: dev@dpdk.org
Date: Tue, 5 Nov 2019 10:03:09 +0200
X-Mailer: git-send-email 2.21.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH] net/mlx5: allow pattern start from IP
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Some applications, e.g. OVS, create rules like:

[1] pattern ipv4 / end actions ...

which are intended to match IPv4 on non-VLAN Ethernet without an
explicit eth item, and the MLX5 NIC supports this kind of match,
so the PMD should accept it.

Fixes: 906a2efae8da ("net/mlx5: validate flow rule item order")

Signed-off-by: Xiaoyu Min
---
 drivers/net/mlx5/mlx5_flow.c | 28 ++++++++++------------------
 1 file changed, 10 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4b08f4c6c..24ea4a72b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1276,11 +1276,17 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple L2 layers not supported");
-	if (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_L3))
+	if ((!tunnel && (item_flags & MLX5_FLOW_LAYER_OUTER_L3)) ||
+	    (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_L3)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "inner L2 layer should not "
-					  "follow inner L3 layers");
+					  "L2 layer should not follow "
+					  "L3 layers");
+	if ((!tunnel && (item_flags & MLX5_FLOW_LAYER_OUTER_VLAN)) ||
+	    (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_VLAN)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "L2 layer should not follow VLAN");
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
@@ -1327,8 +1333,6 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	const uint64_t vlanm = tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
 					MLX5_FLOW_LAYER_OUTER_VLAN;
-	const uint64_t l2m = tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
-				      MLX5_FLOW_LAYER_OUTER_L2;
 
 	if (item_flags & vlanm)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
@@ -1336,11 +1340,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 	else if ((item_flags & l34m) != 0)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "L2 layer cannot follow L3/L4 layer");
-	else if ((item_flags & l2m) == 0)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before VLAN");
+					  "VLAN cannot follow L3/L4 layer");
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
@@ -1453,10 +1453,6 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "L3 cannot follow an NVGRE layer.");
-	else if (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L2))
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before IPV4");
 	if (!mask)
 		mask = &rte_flow_item_ipv4_mask;
 	else if (mask->hdr.next_proto_id != 0 &&
@@ -1548,10 +1544,6 @@ mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "L3 cannot follow an NVGRE layer.");
-	else if (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L2))
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before IPV6");
 	if (!mask)
 		mask = &rte_flow_item_ipv6_mask;
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
-- 
2.24.0.rc0.3.g12a4aeaad8
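
[Editor's note] Below is a minimal sketch, not part of the patch, of the kind of
rule this change enables through the public rte_flow API: a pattern that starts
directly with an IPv4 item and has no preceding eth item. The helper name
create_ipv4_only_rule, the port_id/rx_queue parameters, and the QUEUE action are
illustrative assumptions; EAL initialization and port/queue setup are assumed to
have been done elsewhere.

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
create_ipv4_only_rule(uint16_t port_id, uint16_t rx_queue,
		      struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/*
	 * No eth item before IPv4; with this patch the mlx5 validation no
	 * longer rejects such a pattern with "no L2 layer before IPV4".
	 * An empty spec/mask on the IPv4 item matches any IPv4 packet.
	 */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = rx_queue };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Validate first, then create the rule on the given port. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, error) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

A roughly equivalent testpmd command line (queue index chosen arbitrarily)
would be:

	flow create 0 ingress pattern ipv4 / end actions queue index 0 / end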