From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Gregory Etelson
Subject: [PATCH v2 15/17] net/mlx5: support flow integrity in HWS group 0
Date: Wed, 28 Sep 2022 06:31:28 +0300
Message-ID: <20220928033130.9106-16-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220928033130.9106-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
 <20220928033130.9106-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Gregory Etelson

- Reformat flow integrity item translation for HWS code.
- Support flow integrity bits in HWS group 0.
- Update integrity item translation to match positive semantics only.
  Positive flow semantics were described in patch [ae37c0f60c].
Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow.h    |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 163 ++++++++++++++++----------------
 drivers/net/mlx5/mlx5_flow_hw.c |   8 ++
 3 files changed, 90 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e45869a890..3f4aa080bb 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1462,6 +1462,7 @@ struct mlx5_dv_matcher_workspace {
 	struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */
 	const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */
 	const struct rte_flow_item *gre_item; /* Flow GRE item. */
+	const struct rte_flow_item *integrity_items[2];
 };
 
 struct mlx5_flow_split_info {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 085cb23c78..758672568c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -12636,132 +12636,121 @@ flow_dv_aso_age_params_init(struct rte_eth_dev *dev,
 
 static void
 flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v)
+			       void *headers)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l4_ok) {
 		/* RTE l4_ok filter aggregates hardware l4_ok and
 		 * l4_checksum_ok filters.
 		 * Positive RTE l4_ok match requires hardware match on both L4
 		 * hardware integrity bits.
-		 * For negative match, check hardware l4_checksum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L4.
+		 * PMD supports positive integrity item semantics only.
 		 */
-		if (value->l4_ok) {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
-		}
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 !!value->l4_ok);
-	}
-	if (mask->l4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 value->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_ok, 1);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
 	}
 }
 
 static void
 flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v, bool is_ipv4)
+			       void *headers, bool is_ipv4)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l3_ok) {
 		/* RTE l3_ok filter aggregates for IPv4 hardware l3_ok and
 		 * ipv4_csum_ok filters.
 		 * Positive RTE l3_ok match requires hardware match on both L3
 		 * hardware integrity bits.
-		 * For negative match, check hardware l3_csum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L3.
+		 * PMD supports positive integrity item semantics only.
 		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l3_ok, 1);
 		if (is_ipv4) {
-			if (value->l3_ok) {
-				MLX5_SET(fte_match_set_lyr_2_4, headers_m,
-					 l3_ok, 1);
-				MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-					 l3_ok, 1);
-			}
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m,
+			MLX5_SET(fte_match_set_lyr_2_4, headers,
 				 ipv4_checksum_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 ipv4_checksum_ok, !!value->l3_ok);
-		} else {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
-				 value->l3_ok);
 		}
-	}
-	if (mask->ipv4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
-			 value->ipv4_csum_ok);
+	} else if (is_ipv4 && mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, ipv4_checksum_ok, 1);
 	}
 }
 
 static void
-set_integrity_bits(void *headers_m, void *headers_v,
-		   const struct rte_flow_item *integrity_item, bool is_l3_ip4)
+set_integrity_bits(void *headers, const struct rte_flow_item *integrity_item,
+		   bool is_l3_ip4, uint32_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = integrity_item->spec;
-	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *spec;
+	const struct rte_flow_item_integrity *mask;
 
 	/* Integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (!mask)
-		mask = &rte_flow_item_integrity_mask;
-	flow_dv_translate_integrity_l3(mask, spec, headers_m, headers_v,
-				       is_l3_ip4);
-	flow_dv_translate_integrity_l4(mask, spec, headers_m, headers_v);
+	if (MLX5_ITEM_VALID(integrity_item, key_type))
+		return;
+	MLX5_ITEM_UPDATE(integrity_item, key_type, spec, mask,
+			 &rte_flow_item_integrity_mask);
+	flow_dv_translate_integrity_l3(mask, headers, is_l3_ip4);
+	flow_dv_translate_integrity_l4(mask, headers);
 }
 
 static void
-flow_dv_translate_item_integrity_post(void *matcher, void *key,
+flow_dv_translate_item_integrity_post(void *key,
				      const struct rte_flow_item *integrity_items[2],
-				      uint64_t pattern_flags)
+				      uint64_t pattern_flags, uint32_t key_type)
 {
-	void *headers_m, *headers_v;
+	void *headers;
 	bool is_l3_ip4;
 
 	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 inner_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[1], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[1], is_l3_ip4,
+				   key_type);
 	}
 	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 outer_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[0], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[0], is_l3_ip4,
+				   key_type);
 	}
 }
 
-static void
+static uint64_t
 flow_dv_translate_item_integrity(const struct rte_flow_item *item,
-				 const struct rte_flow_item *integrity_items[2],
-				 uint64_t *last_item)
+				 struct mlx5_dv_matcher_workspace *wks,
+				 uint64_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = (typeof(spec))item->spec;
+	if ((key_type & MLX5_SET_MATCHER_SW) != 0) {
+		const struct rte_flow_item_integrity
+			*spec = (typeof(spec))item->spec;
 
-	/* integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (spec->level > 1) {
-		integrity_items[1] = item;
-		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		/* SWS integrity bits validation cleared spec pointer */
+		if (spec->level > 1) {
+			wks->integrity_items[1] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		} else {
+			wks->integrity_items[0] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		}
 	} else {
-		integrity_items[0] = item;
-		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		/* HWS supports outer integrity only */
+		wks->integrity_items[0] = item;
+		wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
+	return wks->last_item;
 }
 
 /**
@@ -13389,6 +13378,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 			flow_dv_translate_item_meter_color(dev, key, items,
 							   key_type);
 			last_item = MLX5_FLOW_ITEM_METER_COLOR;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			last_item = flow_dv_translate_item_integrity(items,
+								     wks, key_type);
+			break;
 		default:
 			break;
 		}
@@ -13452,6 +13445,12 @@ flow_dv_translate_items_hws(const struct rte_flow_item *items,
 		if (ret)
 			return ret;
 	}
+	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		flow_dv_translate_item_integrity_post(key,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      key_type);
+	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(key,
 						 wks.tunnel_item,
@@ -13532,7 +13531,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
-	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 	int ret = 0;
 	int tunnel;
@@ -13543,10 +13541,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		switch (items->type) {
-		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			flow_dv_translate_item_integrity(items, integrity_items,
-							 &wks.last_item);
-			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13588,9 +13582,14 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
-		flow_dv_translate_item_integrity_post(match_mask, match_value,
-						      integrity_items,
-						      wks.item_flags);
+		flow_dv_translate_item_integrity_post(match_mask,
+						      wks_m.integrity_items,
+						      wks_m.item_flags,
+						      MLX5_SET_MATCHER_SW_M);
+		flow_dv_translate_item_integrity_post(match_value,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      MLX5_SET_MATCHER_SW_V);
 	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(match_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1879c8e9ca..31f98a2636 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4618,6 +4618,14 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP6:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			/*
+			 * Integrity flow item validation requires access to
+			 * both item mask and spec.
+			 * Current HWS model allows item mask in pattern
+			 * template and item spec in flow rule.
+			 */
+			break;
 		case RTE_FLOW_ITEM_TYPE_END:
 			items_end = true;
 			break;
-- 
2.25.1