From: Xiaoyu Min
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
Cc: dev@dpdk.org, orika@mellanox.com
Date: Fri, 2 Aug 2019 17:18:23 +0800
Message-Id: <0f1b3637c93d6f923ebcd4c56cc333959a570881.1564737358.git.jackmin@mellanox.com>
X-Mailer: git-send-email 2.21.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH] net/mlx5: fix VLAN inner type matching on DR/DV

The rte_flow_item_vlan has an inner_type field, but matching on it is
missing from the DR/DV flow engine.

With this support added, example testpmd commands could be:

  - matching all VLAN traffic with VID 2:

    testpmd> flow create 0 ingress pattern eth / vlan vid is 2 / end actions queue index 2 / end

  - matching all IPv4 traffic in VLAN with VID 2:

    testpmd> flow create 0 ingress pattern eth / vlan vid is 2 inner_type is 0x0800 / end actions queue index 2 / end

Fixes: fc2c498ccb94 ("net/mlx5: add Direct Verbs translate items")
Cc: orika@mellanox.com

Signed-off-by: Xiaoyu Min
---
 drivers/net/mlx5/mlx5_flow_dv.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9c0a2613d5..f786c7a2c4 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3469,10 +3469,6 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 {
 	const struct rte_flow_item_vlan *vlan_m = item->mask;
 	const struct rte_flow_item_vlan *vlan_v = item->spec;
-	const struct rte_flow_item_vlan nic_mask = {
-		.tci = RTE_BE16(0x0fff),
-		.inner_type = RTE_BE16(0xffff),
-	};
 	void *headers_m;
 	void *headers_v;
 	uint16_t tci_m;
@@ -3481,7 +3477,7 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 	if (!vlan_v)
 		return;
 	if (!vlan_m)
-		vlan_m = &nic_mask;
+		vlan_m = &rte_flow_item_vlan_mask;
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
 					 inner_headers);
@@ -3507,6 +3503,10 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow,
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_cfi, tci_v >> 12);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_m, first_prio, tci_m >> 13);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_prio, tci_v >> 13);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype,
+		 rte_be_to_cpu_16(vlan_m->inner_type));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype,
+		 rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type));
 }
 
 /**
-- 
2.21.0
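
For reference, a minimal sketch (not part of the patch) of how an application could request the equivalent of the second testpmd command above through the rte_flow C API, i.e. IPv4-in-VLAN with VID 2 steered to queue 2. The port id, queue index and helper function name are illustrative assumptions taken from the testpmd example:

/*
 * Usage sketch only: rte_flow equivalent of
 *   flow create 0 ingress pattern eth / vlan vid is 2 inner_type is 0x0800
 *     / end actions queue index 2 / end
 * Port id and queue index follow the testpmd example and are assumptions.
 */
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
create_vlan_ipv4_flow(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_vlan vlan_spec = {
		.tci = RTE_BE16(2),             /* VID 2, PCP/DEI zero */
		.inner_type = RTE_BE16(0x0800), /* IPv4 follows the VLAN tag */
	};
	struct rte_flow_item_vlan vlan_mask = {
		.tci = RTE_BE16(0x0fff),        /* match the VID bits only */
		.inner_type = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_VLAN,
			.spec = &vlan_spec,
			.mask = &vlan_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

The 0x0fff tci mask restricts the match to the VLAN ID bits, mirroring what testpmd generates for "vid is 2", while inner_type selects the EtherType following the VLAN tag, which is the field this patch wires into the DR/DV matcher.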