From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Gregory Etelson
Subject: [PATCH v6 15/18] net/mlx5: support flow integrity in HWS group 0
Date: Thu, 20 Oct 2022 18:41:49 +0300
Message-ID: <20221020154152.28228-16-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221020154152.28228-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
 <20221020154152.28228-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Gregory Etelson

- Reformat flow integrity item translation for HWS code.
- Support flow integrity bits in HWS group 0.
- Update integrity item translation to match positive semantics only.
Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow.h    |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 163 ++++++++++++++++----------------
 drivers/net/mlx5/mlx5_flow_hw.c |   8 ++
 3 files changed, 90 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 10d4cdb502..8ba3c2ddb1 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1473,6 +1473,7 @@ struct mlx5_dv_matcher_workspace {
 	struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */
 	const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */
 	const struct rte_flow_item *gre_item; /* Flow GRE item. */
+	const struct rte_flow_item *integrity_items[2];
 };
 
 struct mlx5_flow_split_info {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 42c4231286..5c6ecc4a1a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -12695,132 +12695,121 @@ flow_dv_aso_age_params_init(struct rte_eth_dev *dev,
 
 static void
 flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v)
+			       void *headers)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l4_ok) {
 		/* RTE l4_ok filter aggregates hardware l4_ok and
 		 * l4_checksum_ok filters.
 		 * Positive RTE l4_ok match requires hardware match on both L4
 		 * hardware integrity bits.
-		 * For negative match, check hardware l4_checksum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L4.
+		 * PMD supports positive integrity item semantics only.
 		 */
-		if (value->l4_ok) {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
-		}
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 !!value->l4_ok);
-	}
-	if (mask->l4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 value->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_ok, 1);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
 	}
 }
 
 static void
 flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v, bool is_ipv4)
+			       void *headers, bool is_ipv4)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l3_ok) {
 		/* RTE l3_ok filter aggregates for IPv4 hardware l3_ok and
 		 * ipv4_csum_ok filters.
 		 * Positive RTE l3_ok match requires hardware match on both L3
 		 * hardware integrity bits.
-		 * For negative match, check hardware l3_csum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L3.
+		 * PMD supports positive integrity item semantics only.
 		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l3_ok, 1);
 		if (is_ipv4) {
-			if (value->l3_ok) {
-				MLX5_SET(fte_match_set_lyr_2_4, headers_m,
-					 l3_ok, 1);
-				MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-					 l3_ok, 1);
-			}
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m,
+			MLX5_SET(fte_match_set_lyr_2_4, headers,
 				 ipv4_checksum_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 ipv4_checksum_ok, !!value->l3_ok);
-		} else {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
-				 value->l3_ok);
 		}
-	}
-	if (mask->ipv4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
-			 value->ipv4_csum_ok);
+	} else if (is_ipv4 && mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, ipv4_checksum_ok, 1);
 	}
 }
 
 static void
-set_integrity_bits(void *headers_m, void *headers_v,
-		   const struct rte_flow_item *integrity_item, bool is_l3_ip4)
+set_integrity_bits(void *headers, const struct rte_flow_item *integrity_item,
+		   bool is_l3_ip4, uint32_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = integrity_item->spec;
-	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *spec;
+	const struct rte_flow_item_integrity *mask;
 
 	/* Integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (!mask)
-		mask = &rte_flow_item_integrity_mask;
-	flow_dv_translate_integrity_l3(mask, spec, headers_m, headers_v,
-				       is_l3_ip4);
-	flow_dv_translate_integrity_l4(mask, spec, headers_m, headers_v);
+	if (MLX5_ITEM_VALID(integrity_item, key_type))
+		return;
+	MLX5_ITEM_UPDATE(integrity_item, key_type, spec, mask,
+			 &rte_flow_item_integrity_mask);
+	flow_dv_translate_integrity_l3(mask, headers, is_l3_ip4);
+	flow_dv_translate_integrity_l4(mask, headers);
 }
 
 static void
-flow_dv_translate_item_integrity_post(void *matcher, void *key,
+flow_dv_translate_item_integrity_post(void *key,
 				      const struct rte_flow_item *integrity_items[2],
-				      uint64_t pattern_flags)
+				      uint64_t pattern_flags, uint32_t key_type)
 {
-	void *headers_m, *headers_v;
+	void *headers;
 	bool is_l3_ip4;
 
 	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 inner_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[1], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[1], is_l3_ip4,
+				   key_type);
 	}
 	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 outer_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[0], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[0], is_l3_ip4,
+				   key_type);
 	}
 }
 
-static void
+static uint64_t
 flow_dv_translate_item_integrity(const struct rte_flow_item *item,
-				 const struct rte_flow_item *integrity_items[2],
-				 uint64_t *last_item)
+				 struct mlx5_dv_matcher_workspace *wks,
+				 uint64_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = (typeof(spec))item->spec;
+	if ((key_type & MLX5_SET_MATCHER_SW) != 0) {
+		const struct rte_flow_item_integrity
+			*spec = (typeof(spec))item->spec;
 
-	/* integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (spec->level > 1) {
-		integrity_items[1] = item;
-		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		/* SWS integrity bits validation cleared spec pointer */
+		if (spec->level > 1) {
+			wks->integrity_items[1] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		} else {
+			wks->integrity_items[0] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		}
 	} else {
-		integrity_items[0] = item;
-		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		/* HWS supports outer integrity only */
+		wks->integrity_items[0] = item;
+		wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
+	return wks->last_item;
 }
 
 /**
@@ -13448,6 +13437,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 			flow_dv_translate_item_meter_color(dev, key, items,
 							   key_type);
 			last_item = MLX5_FLOW_ITEM_METER_COLOR;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			last_item = flow_dv_translate_item_integrity(items,
+								     wks, key_type);
+			break;
 		default:
 			break;
 		}
@@ -13511,6 +13504,12 @@ flow_dv_translate_items_hws(const struct rte_flow_item *items,
 		if (ret)
 			return ret;
 	}
+	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		flow_dv_translate_item_integrity_post(key,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      key_type);
+	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(key,
 						 wks.tunnel_item,
@@ -13591,7 +13590,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
-	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 	int ret = 0;
 	int tunnel;
 
@@ -13602,10 +13600,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		switch (items->type) {
-		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			flow_dv_translate_item_integrity(items, integrity_items,
-							 &wks.last_item);
-			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13648,9 +13642,14 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
-		flow_dv_translate_item_integrity_post(match_mask, match_value,
-						      integrity_items,
-						      wks.item_flags);
+		flow_dv_translate_item_integrity_post(match_mask,
+						      wks_m.integrity_items,
+						      wks_m.item_flags,
+						      MLX5_SET_MATCHER_SW_M);
+		flow_dv_translate_item_integrity_post(match_value,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      MLX5_SET_MATCHER_SW_V);
 	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(match_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 59c5383553..07b58db044 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4658,6 +4658,14 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP6:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			/*
+			 * Integrity flow item validation requires access to
+			 * both item mask and spec.
+			 * Current HWS model allows item mask in pattern
+			 * template and item spec in flow rule.
+			 */
+			break;
 		case RTE_FLOW_ITEM_TYPE_END:
 			items_end = true;
 			break;
-- 
2.25.1